Hadoop cluster

Over the Dragon Boat Festival holiday I had some spare time and tried setting up a Hadoop cluster. The deployment succeeded, so I am writing the relevant material down here, purely for my own reference~

master 192.168.234.20

node1 192.168.234.21
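
For the hostnames master and node1 to resolve, they are typically mapped in /etc/hosts on both machines. A minimal sketch, assuming no DNS is in use:

# /etc/hosts on both master and node1 (sketch; adjust to your own network)
192.168.234.20   master
192.168.234.21   node1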


Edit the three configuration files (their full contents are listed at the end of this post):

vi /opt/modules/hadoop/hadoop-1.0.3/conf/core-site.xml

vi /opt/modules/hadoop/hadoop-1.0.3/conf/hdfs-site.xml

vi /opt/modules/hadoop/hadoop-1.0.3/conf/mapred-site.xml
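
Besides these three files, Hadoop 1.x also needs JAVA_HOME, either exported in the environment or set in conf/hadoop-env.sh. A hedged sketch; the JDK path below is only a placeholder:

vi /opt/modules/hadoop/hadoop-1.0.3/conf/hadoop-env.sh
# export JAVA_HOME=/usr/java/jdk1.6.0_XX   # hypothetical path; point it at the JDK actually installed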


mkdir -p /opt/data/hadoop/

mkdir -p /opt/data/hadoop/mapred/mrlocal

mkdir -p /opt/data/hadoop/mapred/mrsystem

mkdir -p /opt/data/hadoop/hdfs/name

mkdir -p /opt/data/hadoop/hdfs/data

mkdir -p /opt/data/hadoop/hdfs/namesecondary

chown -R hadoop:hadoop /opt/data/*
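
The DataNode and TaskTracker on node1 need the same directory layout. A sketch, assuming passwordless SSH to node1 is already configured (see the SSH notes later in this post) and the command is run with sufficient privileges:

ssh node1 'mkdir -p /opt/data/hadoop/mapred/mrlocal /opt/data/hadoop/mapred/mrsystem /opt/data/hadoop/hdfs/name /opt/data/hadoop/hdfs/data /opt/data/hadoop/hdfs/namesecondary && chown -R hadoop:hadoop /opt/data/*'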


# Format the HDFS filesystem (NameNode):

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop namenode -format
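
A quick way to confirm the format succeeded is to check that the name directory was populated; in Hadoop 1.x it should now contain a current/ subdirectory with VERSION, fsimage and edits files:

ls -l /opt/data/hadoop/hdfs/name/current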

# Start the NameNode on the master node:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode

# Start the JobTracker:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start jobtracker

# Start the SecondaryNameNode:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start secondarynamenode

# Start the DataNode and TaskTracker:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start datanode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh start tasktracker
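
After starting the daemons, jps is a quick sanity check. Roughly, the master should list NameNode, JobTracker and SecondaryNameNode, and node1 should list DataNode and TaskTracker; adjust expectations to wherever each daemon was actually started:

jps              # run as the hadoop user on the master
ssh node1 jps    # and on the slave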



# Stop the daemons:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop namenode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop jobtracker

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop secondarynamenode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop datanode

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop-daemon.sh stop tasktracker
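
Starting and stopping every daemon by hand works, but Hadoop 1.0.3 also ships start-all.sh / stop-all.sh, which drive all daemons over SSH using conf/masters and conf/slaves. A possible shortcut once passwordless SSH is in place:

/opt/modules/hadoop/hadoop-1.0.3/bin/start-all.sh
/opt/modules/hadoop/hadoop-1.0.3/bin/stop-all.sh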



Web UIs (NameNode on port 50070, JobTracker on port 50030):

http://master:50070

http://master:50030/


http://192.168.80.200:50070/dfshealth.jsp


http://node1:50070

http://node1:50030/
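
If the web UIs are unreachable, the same cluster status can be checked from the command line, for example:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop dfsadmin -report   # live DataNodes, capacity, usage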


# Clean up stale JVM performance data (hsperfdata) if needed:

rm -r /tmp/hsperfdata_root/*

rm -r /tmp/hsperfdata_hadoop/*


ll /opt/modules/hadoop/hadoop-1.0.3/logs/hadoop-hadoop*

rm -r /opt/modules/hadoop/hadoop-1.0.3/logs/hadoop-hadoop*
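
When a daemon fails to start, its log is usually the fastest clue. The file names follow the pattern hadoop-<user>-<daemon>-<hostname>.log, so on this setup something like the following (the hostname part is an assumption):

tail -n 100 /opt/modules/hadoop/hadoop-1.0.3/logs/hadoop-hadoop-namenode-master.log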


As the hadoop user, after /tmp was deleted, logging in fails with the following error:

GDM could not write to your authorization file. This could mean that you are out of disk space or that your home directory could not be opened for writing. Please contact your system administrator.

Log in as the administrator (root) and run:

chown hadoop:hadoop /tmp
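
For reference, on a standard Linux system /tmp is owned by root with the sticky bit set (mode 1777). Restoring those defaults instead of chowning /tmp to hadoop would also make it writable for every user; a sketch:

chown root:root /tmp
chmod 1777 /tmp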



==========core-site.xml==========

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://master:9000</value>

</property>

<property>

<name>fs.checkpoint.dir</name>

<value>/opt/data/hadoop/hdfs/namesecondary</value>

</property>

<property>

<name>fs.checkpoint.period</name>

<value>1800</value>

</property>

<property>

<name>fs.checkpoint.size</name>

<value>33554432</value>

</property>

<property>

<name>io.compression.codecs</name>

<value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>

</property>

<property>

<name>fs.trash.interval</name>

<value>1440</value>

</property>

</configuration>
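
With fs.default.name pointing at hdfs://master:9000 and fs.trash.interval set to 1440 minutes, files removed with hadoop fs -rm are moved to the user's .Trash directory for a day instead of being deleted immediately. A quick check once the cluster is up (file names here are only examples, commands run as the hadoop user):

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -put /etc/hosts /tmp/hosts.txt
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -rm /tmp/hosts.txt
/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fs -ls /user/hadoop/.Trash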

==============hdfs-site.xml==============

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<configuration>

<property>

<name>dfs.name.dir</name>

<value>/opt/data/hadoop/hdfs/name</value>

<!-- Directory where the HDFS NameNode stores its image (fsimage) files -->

<description>

</description>

</property>

<property>

<name>dfs.data.dir</name>

<value>/opt/data/hadoop/hdfs/data</value>

<description>

</description>

</property>

<property>

<name>dfs.http.address</name>

<value>master:50070</value>

</property>

<property>

<name>dfs.secondary.http.address</name>

<value>node1:50090</value>

</property>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

<property>

<name>dfs.datanode.du.reserved</name>

<value>1073741824</value>

</property>

<property>

<name>dfs.block.size</name>

<value>134217728</value>

</property>


<property>

<name>dfs.permissions</name>

<value>false</value>

</property>

</configuration>
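
Note that dfs.replication is set to 3 here while the setup described above has at most two DataNodes, so blocks will be reported as under-replicated. fsck shows this together with the default replication factor:

/opt/modules/hadoop/hadoop-1.0.3/bin/hadoop fsck /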

==========mapred-site.xml==========

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>master:9001</value>

</property>

<property>

<name>mapred.local.dir</name>

<value>/opt/data/hadoop/mapred/mrlocal</value>

<final>true</final>

</property>

<property>

<name>mapred.system.dir</name>

<value>/opt/data/hadoop/mapred/mrsystem</value>

<final>true</final>

</property>

<property>

<name>mapred.tasktracker.map.tasks.maximum</name>

<value>2</value>

<final>true</final>

</property>

<property>

<name>mapred.tasktracker.reduce.tasks.maximum</name>

<value>1</value>

<final>true</final>

</property>


<property>

<name>io.sort.mb</name>

<value>32</value>

<final>true</final>

</property>


<property>

<name>mapred.child.java.opts</name>

<value>-Xmx64M</value>

</property>

<property>

<name>mapred.compress.map.output</name>

<value>true</value>

</property>

</configuration>
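
Once the JobTracker and at least one TaskTracker are up, the examples jar bundled with the release is a convenient smoke test for this MapReduce configuration (the small -Xmx64M child heap and io.sort.mb=32 are enough for it). A sketch, assuming the jar name shipped with 1.0.3:

cd /opt/modules/hadoop/hadoop-1.0.3
bin/hadoop jar hadoop-examples-1.0.3.jar pi 2 10    # estimate pi with 2 maps, 10 samples each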



http://www.cnblogs.com/jdksummer/articles/2521550.html


Setting up passwordless SSH login on Linux

SSH configuration


Host A: 10.0.5.199

Host B: 10.0.5.198

Host A needs to be configured for passwordless login to both Host A (itself) and Host B.

First make sure the firewall is turned off on all hosts.

Run the following on Host A:

 1. $cd ~/.ssh

 2. $ssh-keygen -t rsa  -------------------- then just keep pressing Enter; with the default options the generated key is saved in the .ssh/id_rsa file.

 3. $cp id_rsa.pub authorized_keys

        Once this step is done, you should normally be able to log in to the local machine without a password, i.e. ssh localhost no longer asks for one.

 4. $scp authorized_keys summer@10.0.5.198:/home/summer/.ssh   ------ copy the authorized_keys file just generated over to Host B.

 5. $chmod 600 authorized_keys      

     Go into the .ssh directory on Host B and change the permissions of the authorized_keys file.

   (Steps 4 and 5 can be combined into one step by running:  $ssh-copy-id -i summer@10.0.5.198 )


Normally, once the steps above are complete, ssh connections from Host A to Host A or Host B only require a password on the very first login; after that none is needed.
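
A simple way to verify this is to run a remote command; if no password prompt appears, key-based login is working (user and host as in the example above):

ssh summer@10.0.5.198 hostname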


Possible problems:


1. When connecting over ssh you get: "Agent admitted failure to sign using the key".

  Run: $ssh-add

  to force the private key to be added.

2. If there is no error message and password login works, but passwordless login still does not, run the following steps on the host being connected to (e.g. if A connects to B over ssh, run them on B):

  $chmod o-w ~/

  $chmod 700 ~/.ssh

  $chmod 600 ~/.ssh/authorized_keys

3. If passwordless login still does not work after step 2, try the following:

  $ps -Af | grep agent

       Check whether an ssh agent is already running. If one is, kill it, then run the command below to start a new agent; if none is running, just run the command below directly:

      $ssh-agent

  If that still does not work, restart the ssh service:

      $sudo service sshd restart

4. ssh-add fails with the message "Could not open a connection to your authentication agent"

Run: ssh-agent bash

====================================================================================================

error: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /tmp/hadoop-*/mapred/system. Name node is in safe mode.

There is no need to panic: the NameNode turns safe mode off automatically during the startup phase and then finishes starting. If you do not want to wait, you can run:

bin/hadoop dfsadmin -safemode leave    # force it out of safe mode
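
The current safe-mode state can also be queried before forcing anything, for example:

bin/hadoop dfsadmin -safemode get     # prints whether safe mode is ON or OFF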

====================================================================================================
