First, the pseudo-distributed installation:
http://my.oschina.net/repine/blog/267698
If you still can't do that, all I can say is: hard to believe. Keep at it! Do it.
(1) Run hostname cloud4 to change the hostname for the current session
(2) Verify: run hostname
(3) Run vi /etc/sysconfig/network and change the HOSTNAME line so the new name survives a reboot
(4) Verify: run reboot to restart the machine (not reboot -h now; the -h flag belongs to shutdown and halts the machine instead of restarting it)
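Step (3) edits the file interactively; the same change can be scripted. A minimal sketch, demonstrated on a scratch copy of the file so it is safe to run anywhere (on a real CentOS-style node, point NET_FILE at /etc/sysconfig/network and run as root):

```shell
# Scratch copy standing in for /etc/sysconfig/network.
NET_FILE=./network.demo
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$NET_FILE"

# Replace the HOSTNAME line in place, as step (3) does via vi.
sed -i 's/^HOSTNAME=.*/HOSTNAME=cloud4/' "$NET_FILE"

grep '^HOSTNAME=' "$NET_FILE"   # HOSTNAME=cloud4
```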
(1) Run vi /etc/hosts
Add a line at the end: 192.168.80.101 cloud41
(2) Verify: ping cloud41
(3) On Windows, map the hostnames to their IPs in:
C:\Windows\System32\drivers\etc\hosts
A freshly created machine must be configured this way.
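The /etc/hosts step can be scripted the same way. A sketch against a scratch file (on the real machine, use /etc/hosts as root; the IP is the one assumed throughout this walkthrough):

```shell
# Scratch file standing in for /etc/hosts.
HOSTS=./hosts.demo
: > "$HOSTS"

# Append the mapping only if it is not already present (idempotent).
grep -q 'cloud41' "$HOSTS" || echo '192.168.80.101 cloud41' >> "$HOSTS"

tail -1 "$HOSTS"   # 192.168.80.101 cloud41
```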
//(1) Run ssh-keygen -t rsa (then press Enter through the prompts); the key pair is created under /root/.ssh/
//(2) Run cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys to create the authorization file
//(3) Verify: ssh localhost
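The interactive Enter presses can be avoided: ssh-keygen accepts -N '' for an empty passphrase and -f for the output path. A sketch against a scratch directory (on a real node, use the default /root/.ssh/id_rsa path):

```shell
KEYDIR=./ssh.demo              # stand-in for /root/.ssh
mkdir -p "$KEYDIR"

ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q     # (1) no prompts
cp "$KEYDIR/id_rsa.pub" "$KEYDIR/authorized_keys"  # (2) authorize our own key

ls "$KEYDIR"                   # authorized_keys  id_rsa  id_rsa.pub
```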
If the machine was cloned, go straight to the next step:
Check whether passwordless login is already set up on this system:
ssh cloud4 (that is, ssh <hostname>)
(the master node pushes its public key to the slave node)
ssh-copy-id -i cloud41
Switch to the slave node
ssh cloud41
(the slave node pushes its public key to the master node)
ssh-copy-id -i cloud4
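Once both ssh-copy-id runs have completed, each direction can be spot-checked. These two lines assume the cloud4/cloud41 cluster from this walkthrough, so they only run there:

```shell
# From cloud4: should print "cloud41" with no password prompt.
ssh cloud41 hostname
# From cloud41: should print "cloud4" with no password prompt.
ssh cloud4 hostname
```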
(-r means copy recursively)
scp -r /usr/local/jdk cloud41:/usr/local
scp -r /usr/local/hadoop cloud41:/usr/local
scp /etc/profile cloud41:/etc
source /etc/profile (makes the configuration take effect)
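A quick way to confirm the three scp copies landed, run from the master (again assuming the cloud41 slave from this walkthrough):

```shell
# List the copied JDK and Hadoop trees and the tail of the copied profile.
ssh cloud41 'ls -d /usr/local/jdk /usr/local/hadoop && tail -n 3 /etc/profile'
```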
The slaves file lists the child nodes; its contents are as follows:
localhost (optional: keep it only if the master node should also store data; normally the DataNodes live on the slave nodes)
cloud41
cloud42
(one hostname per line)
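Writing the file can be sketched as below. Hadoop reads the slaves file one hostname per line (a comma-separated line will not parse). The sketch writes to ./slaves; on the master it belongs at $HADOOP_HOME/conf/slaves (the Hadoop 1.x location assumed here):

```shell
# One hostname per line; add "localhost" as a first line if the
# master should also run a DataNode.
cat > ./slaves <<'EOF'
cloud41
cloud42
EOF

wc -l ./slaves   # 2 ./slaves
# then, on the master: cp ./slaves $HADOOP_HOME/conf/slaves
```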
hadoop namenode -format
start-all.sh
On the master:
[root@cloud4 ~]# jps
4020 NameNode
4353 JobTracker
4466 TaskTracker
4508 Jps
4275 SecondaryNameNode
4154 DataNode
On the slave:
[root@cloud41 ~]# jps
3286 TaskTracker
3352 Jps
3194 DataNode
In hadoop/bin on the new node:
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
hadoop dfsadmin -refreshNodes
In hadoop/bin on the master:
(Optional) Run the load-balancing command start-balancer.sh on the master node
hadoop fs -setrep 2 /hello (set the replication factor of /hello to 2)
hadoop dfsadmin -safemode leave|enter|get (leave turns safe mode off, enter turns it on, get reports the current state)