【Hadoop大数据分析与挖掘实战】 (Part 2) ---------- pp. 22~23

【Hadoop大数据分析与挖掘实战】 (Part 1)

5. Set up passwordless SSH login

  1) Start all three machines, set their hostnames to master, slave1, and slave2 respectively, then reboot.

[root@localhost ~]# vim /etc/sysconfig/network
# Edit the file as follows (no spaces around "="); set HOSTNAME to the
# machine's own name on each node.
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
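Once the machines come back up, it is worth verifying that the new name actually took effect. A minimal sanity check (run on each node; shown here for master):

```shell
# Both commands report the kernel's current node name and should print
# "master" on the master machine (slave1/slave2 on the slaves).
hostname
uname -n
```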

  2) Edit /etc/hosts on master.

# Append the following entries
192.168.222.131 master
192.168.222.132 slave1
192.168.222.133 slave2

  3) Copy the hosts file to slave1 and slave2.

[root@master ~]# scp /etc/hosts root@slave1:/etc
[root@master ~]# scp /etc/hosts root@slave2:/etc
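After the copy, a quick hedged check that each node resolves the cluster names (the loop only assumes the three hostnames used above):

```shell
# Look each name up the way the resolver does; a configured node prints
# "<ip> <name>" per line, while a missing entry prints a warning instead.
for h in master slave1 slave2; do
  getent hosts "$h" || echo "$h: not resolvable"
done
```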

  4) Log in to master as the hadoop user (make sure every following step is performed as hadoop), then run the ssh-keygen -t rsa command to generate a key pair.

[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /tmp/id_rsa.
Your public key has been saved in /tmp/id_rsa.pub.
The key fingerprint is:
fa:2c:f7:98:9f:13:a1:7d:86:04:4c:54:5b:76:3e:01 hadoop@master
The key's randomart image is:
+--[ RSA 2048]----+
|       +o.. E.o  |
|        o  + o . |
|         ..   o  |
|          o    . |
|        S+ o     |
|       .. + o    |
|      .    +     |
|      .o.o..     |
|       o=o+.     |
+-----------------+
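Pressing Enter through the prompts accepts the defaults. If you prefer a non-interactive run, ssh-keygen can take the answers on the command line; a sketch, writing into a scratch directory so an existing key is not clobbered:

```shell
# -N '' sets an empty passphrase, -f names the key file, -q suppresses the banner.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N '' -f "$keydir/id_rsa"
ls -l "$keydir/id_rsa" "$keydir/id_rsa.pub"   # private key and public key
```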

  5) Copy the public key to slave1 and slave2.

[hadoop@master tmp]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave1
# Enter the password for hadoop@slave1
[hadoop@master tmp]$ ssh-copy-id -i ~/.ssh/id_rsa.pub slave2
# Enter the password for hadoop@slave2
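Under the hood, ssh-copy-id essentially appends the public key to ~/.ssh/authorized_keys on the remote side and tightens permissions (sshd ignores the file when it is group- or world-writable). A local sketch of that step, using a scratch directory as a stand-in for the slave's ~/.ssh and a placeholder key line:

```shell
remote_ssh=$(mktemp -d)                            # stand-in for slave1's ~/.ssh
pubkey='ssh-rsa EXAMPLEKEY hadoop@master'          # placeholder public key line
echo "$pubkey" >> "$remote_ssh/authorized_keys"    # what ssh-copy-id appends
chmod 700 "$remote_ssh"                            # ~/.ssh must be 700
chmod 600 "$remote_ssh/authorized_keys"            # authorized_keys must be 600
```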

  6) Log in again; slave1 and slave2 can now be reached without a password.

[hadoop@master tmp]$ ssh slave1
Last login: Mon Jan 23 14:37:40 2017 from 192.168.0.112
[hadoop@slave1 ~]$ 
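To confirm all slaves in one go, a small loop helps; BatchMode=yes makes ssh fail immediately instead of falling back to a password prompt, so any node that still asks for one shows up as FAIL (hostnames assumed as above):

```shell
for h in slave1 slave2; do
  if ssh -o BatchMode=yes "$h" true 2>/dev/null; then
    echo "$h OK"
  else
    echo "$h FAIL"
  fi
done
```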

 

The next part is section 6, installing Hadoop; I still need to study the configuration files before writing it up.
