Hadoop Cluster Setup - 02: Installing and Configuring ZooKeeper
This post walks through the preparation work for building the Hadoop cluster.
First, boot a CentOS 7 virtual machine and point yum at the Huawei Cloud mirror:
[root@localhost ~]# cp -a /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@localhost ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://repo.huaweicloud.com/repository/conf/CentOS-7-reg.repo
[root@localhost ~]# yum clean all
[root@localhost ~]# yum makecache
[root@localhost ~]# yum update -y
Then install the usual grab bag of commonly needed packages:
[root@localhost ~]# yum install -y openssh-server vim gcc gcc-c++ glibc-headers bzip2-devel lzo-devel curl wget openssh-clients zlib-devel autoconf automake cmake libtool openssl-devel fuse-devel snappy-devel telnet unzip zip net-tools.x86_64 firewalld systemd
Turn off the firewall:
[root@localhost ~]# firewall-cmd --state                      # check firewall status
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service
[root@localhost ~]# systemctl is-enabled firewalld.service    # should now report "disabled"
Disable SELinux as well:
[root@localhost ~]# /usr/sbin/sestatus -v       # check SELinux status
[root@localhost ~]# vim /etc/selinux/config     # change the state to disabled
SELINUX=disabled
[root@localhost ~]# reboot
Install JDK 8. Download page: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
[root@localhost ~]# rpm -ivh jdk-8u144-linux-x64.rpm
[root@localhost ~]# vim /etc/profile    # append the following at the end of the file
export JAVA_HOME=/usr/java/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Edits to /etc/profile only take effect for new login sessions; to apply them to the current session as well, source the file:
[root@localhost ~]# source /etc/profile
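As a quick sanity check, the same export lines can be run in any ordinary shell to confirm the derived paths come out as expected (the jdk1.8.0_144 path matches the RPM installed above):

```shell
# The export lines from /etc/profile, run standalone; JAVA_HOME is the
# directory the jdk-8u144 RPM installs into
export JAVA_HOME=/usr/java/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
echo "$JRE_HOME"    # /usr/java/jdk1.8.0_144/jre
```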
Sync the system clock from an NTP server:
[root@localhost ~]# yum install -y ntpdate
[root@localhost ~]# ntpdate ntp1.aliyun.com
Create the hadoop user:
[root@localhost ~]# useradd hadoop
[root@localhost ~]# passwd hadoop
Allow only members of the wheel group to switch to root with `su - root`, which improves security:
[root@localhost ~]# sed -i 's/#auth\t\trequired\tpam_wheel.so/auth\t\trequired\tpam_wheel.so/g' '/etc/pam.d/su'
[root@localhost ~]# cp /etc/login.defs /etc/login.defs_bak
[root@localhost ~]# echo "SU_WHEEL_ONLY yes" >> /etc/login.defs
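To see what that sed command actually does, here is the same substitution run against a local sample line instead of the real /etc/pam.d/su (the `use_uid` suffix is how the line typically appears on CentOS 7; treat the exact file contents as an assumption):

```shell
# Demo of the sed edit on a local sample, not the real /etc/pam.d/su;
# the sample line mirrors the commented-out pam_wheel entry
printf '#auth\t\trequired\tpam_wheel.so use_uid\n' > su.sample
sed -i 's/#auth\t\trequired\tpam_wheel.so/auth\t\trequired\tpam_wheel.so/g' su.sample
cat su.sample    # the leading "#" is gone, so the pam_wheel line is now active
```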
Add the hadoop user to the wheel group:
[root@localhost ~]# gpasswd -a hadoop wheel
[root@localhost ~]# cat /etc/group | grep wheel    # confirm hadoop is now in the wheel group
Map hostnames for all five machines:
[root@localhost ~]# vim /etc/hosts
192.168.10.3 nn1.hadoop    # this machine's IP; the hostnames are set in a moment
192.168.10.4 nn2.hadoop
192.168.10.5 s1.hadoop
192.168.10.6 s2.hadoop
192.168.10.7 s3.hadoop
Then, on each machine, set the hostname and configure a static IP so that they match the hosts entries above exactly:
[root@localhost ~]# hostnamectl set-hostname nn1.hadoop
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"          # change this to static
IPADDR="192.168.10.3"       # add: each VM's own IP
NETMASK="255.255.255.0"     # add
GATEWAY="192.168.10.2"      # add: your VM network's gateway
DNS="192.168.10.2"          # add
NM_CONTROLLED="no"          # add: otherwise edits apply immediately and may kill the network
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="49f05112-b80b-45c2-a3ec-d64c76ed2d9b"
DEVICE="ens33"
ONBOOT="yes"
[root@localhost ~]# systemctl stop NetworkManager.service       # stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager.service    # keep it from starting on boot
[root@localhost ~]# systemctl restart network.service           # restart the network service
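Before restarting the network it is worth grepping the edited file for the critical keys; a small sketch against a local sample file (the file name and values here are illustrative):

```shell
# Sanity-check the keys that matter in an ifcfg file; a local sample is
# used here so nothing on the system is touched
cat > ifcfg.sample <<'EOF'
BOOTPROTO="static"
IPADDR="192.168.10.3"
NM_CONTROLLED="no"
ONBOOT="yes"
EOF
grep -E '^(BOOTPROTO|IPADDR|NM_CONTROLLED|ONBOOT)=' ifcfg.sample
```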
At this point we should have five virtual machines, each configured with the IP and hostname below, and the same hosts file set up on all five:
192.168.10.3 nn1.hadoop
192.168.10.4 nn2.hadoop
192.168.10.5 s1.hadoop
192.168.10.6 s2.hadoop
192.168.10.7 s3.hadoop

On every machine the firewall and SELinux are now disabled, JDK 8 is installed with its environment variables configured, and the hadoop user exists and belongs to the wheel group.
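Since this same five-host list reappears later in the batch scripts' ips file, it can be derived straight from the hosts entries; a sketch using a local sample file in place of /etc/hosts:

```shell
# Extract the hostname column from hosts-style entries; a local sample
# stands in for /etc/hosts here
cat > hosts.sample <<'EOF'
192.168.10.3 nn1.hadoop
192.168.10.4 nn2.hadoop
192.168.10.5 s1.hadoop
192.168.10.6 s2.hadoop
192.168.10.7 s3.hadoop
EOF
awk '{print $2}' hosts.sample
```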
All of the steps so far were performed as root; from here on, switch to the hadoop user for almost everything.
[root@nn1 ~]# su - hadoop    # note the "-": it switches the environment along with the user
[hadoop@nn1 ~]$              # the prompt is now hadoop's; "#" marks root, "$" a regular user
Setting up passwordless SSH.
The idea: generate a key pair on each machine, collect all the public keys into the ~/.ssh/authorized_keys file on one node, then distribute that file to every machine, at which point all five can SSH to each other without a password.
[hadoop@nn1 ~]$ pwd    # confirm we are in hadoop's home directory
/home/hadoop
[hadoop@nn1 ~]$ mkdir .ssh
[hadoop@nn1 ~]$ chmod 700 ./.ssh
[hadoop@nn1 ~]$ ll -a
drwx------ 2 hadoop hadoop 132 Jul 16 22:13 .ssh
[hadoop@nn1 ~]$ ssh-keygen -t rsa    # generate the key pair (accept the defaults at each prompt)
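When scripting this across all five machines, ssh-keygen can also run non-interactively; a sketch writing to a scratch path (not ~/.ssh) so it is safe to try anywhere:

```shell
# Generate a key pair without prompts: -N "" sets an empty passphrase,
# -f picks the output file, -q silences the banner
ssh-keygen -t rsa -N "" -f ./demo_key -q
ls demo_key demo_key.pub
```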
That completes the setup on nn1 (nn1 will be our main working machine from here on). Repeat the same steps on the other four machines, then rename each of their ~/.ssh/id_rsa.pub files (so they neither collide nor get accidentally overwritten) and send them to nn1's ~/.ssh/:
[hadoop@nn2 ~]$ scp ~/.ssh/id_rsa.pub hadoop@nn1.hadoop:~/.ssh/id_rsa.pubnn2
nn1's ~/.ssh/ should now contain five uniquely named .pub files, including its own; append them all to the authorized_keys file:
[hadoop@nn1 ~]$ touch ~/.ssh/authorized_keys
[hadoop@nn1 ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@nn1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@nn1 ~]$ cat ~/.ssh/id_rsa.pubnn2 >> ~/.ssh/authorized_keys
[hadoop@nn1 ~]$ cat ~/.ssh/id_rsa.pubs1 >> ~/.ssh/authorized_keys
…………
Finally, push this file out to the other four machines (the batch script isn't written yet at this point, so send it with scp one machine at a time).
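The per-host distribution is just a loop over the other four hostnames; sketched here as a dry run that only prints the scp commands, since the cluster hosts are not reachable from an arbitrary machine:

```shell
# Dry run: print the scp command for each remaining host instead of
# executing it; drop the echo to actually copy the file
hosts="nn2.hadoop s1.hadoop s2.hadoop s3.hadoop"
for host in $hosts; do
    echo "scp ~/.ssh/authorized_keys hadoop@${host}:~/.ssh/authorized_keys"
done
```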
That completes passwordless SSH between all five machines; test it from each node (details omitted).
With five machines, many operations have to happen on all of them at once, so we need some batch-execution scripts.
# file: ips
"nn1.hadoop"
"nn2.hadoop"
"s1.hadoop"
"s2.hadoop"
"s3.hadoop"
#!/bin/bash
# file: ssh_all.sh -- run the given command on every host in ips, as the hadoop user
RUN_HOME=$(cd "$(dirname "$0")"; echo "${PWD}")
NOW_LIST=($(cat "${RUN_HOME}/ips"))
SSH_USER="hadoop"
for i in "${NOW_LIST[@]}"; do
    f_cmd="ssh $SSH_USER@$i \"$*\""
    echo "$f_cmd"
    if eval $f_cmd; then
        echo "OK"
    else
        echo "FAIL"
    fi
done
#!/bin/bash
# file: ssh_root.sh -- run the given command as root on every host, via each host's ~/exe.sh
RUN_HOME=$(cd "$(dirname "$0")"; echo "${PWD}")
NOW_LIST=($(cat "${RUN_HOME}/ips"))
SSH_USER="hadoop"
for i in "${NOW_LIST[@]}"; do
    f_cmd="ssh $SSH_USER@$i ~/exe.sh \"$*\""
    echo "$f_cmd"
    if eval $f_cmd; then
        echo "OK"
    else
        echo "FAIL"
    fi
done
#!/bin/bash
# file: exe.sh -- switch to root and run the command passed in
# (this script must exist in hadoop's home directory on each host)
cmd=$*
su - <<EOF
$cmd
EOF
#!/bin/bash
# file: scp_all.sh -- copy local file $1 to remote path $2 on every host in ips
RUN_HOME=$(cd "$(dirname "$0")"; echo "${PWD}")
NOW_LIST=($(cat "${RUN_HOME}/ips"))
SSH_USER="hadoop"
for i in "${NOW_LIST[@]}"; do
    f_cmd="scp $1 $SSH_USER@$i:$2"
    echo "$f_cmd"
    if eval $f_cmd; then
        echo "OK"
    else
        echo "FAIL"
    fi
done
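All three scripts share the same build-then-eval pattern: assemble the command as a string, echo it for logging, eval it, and report OK or FAIL from the exit status. A minimal standalone sketch of that pattern:

```shell
# The core pattern from ssh_all.sh/scp_all.sh, with a harmless local
# command substituted for the ssh/scp call
f_cmd="echo hello"
echo "$f_cmd"       # log the command about to run
if eval $f_cmd; then
    echo "OK"       # exit status 0
else
    echo "FAIL"     # nonzero exit status
fi
```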
That completes the preparation work; the next post covers installing and configuring ZooKeeper.