Big Data Tutorial (11.3): Hadoop 2.9.1 Cluster HA Setup

           The previous post covered the NameNode's safemode. In this post I'll walk through the full process of building a highly available (HA) Hadoop cluster.

    1. Cluster plan

           Hostname              IP                Installed software        Running processes
           centos-aaron-ha-01    192.168.29.149    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-02    192.168.29.150    jdk, hadoop               NameNode, DFSZKFailoverController (zkfc)
           centos-aaron-ha-03    192.168.29.151    jdk, hadoop               ResourceManager
           centos-aaron-ha-04    192.168.29.152    jdk, hadoop               ResourceManager
           centos-aaron-ha-05    192.168.29.153    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-06    192.168.29.154    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain
           centos-aaron-ha-07    192.168.29.155    jdk, hadoop, zookeeper    DataNode, NodeManager, JournalNode, QuorumPeerMain

    2. Server base environment preparation (hostname, IP, hosts mapping, firewall off, passwordless ssh, JDK)

           (1) Clone 7 CentOS 6.9 minimal systems.

           (2) Basic network configuration (hostname, IP, NIC, hosts mapping).

               For detailed steps, see Big Data Tutorial (2.1): VMware virtual machine cloning (network configuration issues).

           (3) Disable the firewall

# disable selinux
sudo vi /etc/sysconfig/selinux
# change SELINUX=enforcing to SELINUX=disabled
# check firewall status
sudo service iptables status
# stop the firewall
sudo service iptables stop
# disable the firewall permanently (across reboots)
sudo chkconfig iptables off
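
To confirm the changes, a quick optional check (sestatus shows the current SELinux mode; the config change only fully applies after a reboot):

sestatus                          # SELinux status
sudo chkconfig --list iptables    # every runlevel should show "off"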

           (4) hadoop account setup (hadoop/hadoop) and sudo privileges

***** add the user
useradd hadoop
# a password must be set before the account can log in
passwd hadoop    # enter the password at the prompts


** grant the user sudo privileges
# as root, edit the sudoers file: vi /etc/sudoers
# add a line for hadoop right below the root entry:
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL

# the hadoop user can then run system-level commands via sudo
[hadoop@shizhan ~]$ sudo useradd huangxiaoming

           (5) Passwordless ssh login setup (centos-aaron-ha-01 / centos-aaron-ha-03)

               Because centos-aaron-ha-01 is used to start the HDFS system, it needs passwordless ssh access to the other servers; likewise, centos-aaron-ha-03 is used to start the YARN system and needs passwordless ssh access to the other servers as well.

# Note: run the following commands on every server, otherwise the remote copies
# and ssh logins below may not work
sudo rpm -qa | grep ssh                       # check which ssh packages are installed
sudo yum list | grep ssh                      # check the ssh packages available in yum
sudo yum -y install openssh-server            # install the server
sudo yum -y install openssh-clients.x86_64    # install the client

               a. Configure the hosts mapping and distribute it to all servers

vi /etc/hosts
# add the following entries
192.168.29.149 centos-aaron-ha-01
192.168.29.150 centos-aaron-ha-02
192.168.29.151 centos-aaron-ha-03
192.168.29.152 centos-aaron-ha-04
192.168.29.153 centos-aaron-ha-05
192.168.29.154 centos-aaron-ha-06
192.168.29.155 centos-aaron-ha-07
# distribute to the other servers via scp
sudo scp /etc/hosts  root@192.168.29.150:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.151:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.152:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.153:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.154:/etc/hosts
sudo scp /etc/hosts  root@192.168.29.155:/etc/hosts
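
A quick sanity check that the names now resolve on each server:

ping -c 1 centos-aaron-ha-07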

               b. First, configure passwordless login from centos-aaron-ha-01 to all seven nodes, centos-aaron-ha-01 through centos-aaron-ha-07 (including itself)

# generate a key pair on centos-aaron-ha-01
ssh-keygen -t rsa
# copy the public key to every node, including this one
ssh-copy-id hadoop@centos-aaron-ha-01
ssh-copy-id hadoop@centos-aaron-ha-02
ssh-copy-id hadoop@centos-aaron-ha-03
ssh-copy-id hadoop@centos-aaron-ha-04
ssh-copy-id hadoop@centos-aaron-ha-05
ssh-copy-id hadoop@centos-aaron-ha-06
ssh-copy-id hadoop@centos-aaron-ha-07

              c. Configure passwordless login from centos-aaron-ha-03 to centos-aaron-ha-03, centos-aaron-ha-04, centos-aaron-ha-05, centos-aaron-ha-06 and centos-aaron-ha-07

# generate a key pair on centos-aaron-ha-03
ssh-keygen -t rsa
# copy the public key to the other nodes
ssh-copy-id centos-aaron-ha-03				
ssh-copy-id centos-aaron-ha-04
ssh-copy-id centos-aaron-ha-05
ssh-copy-id centos-aaron-ha-06
ssh-copy-id centos-aaron-ha-07

              d. Note: the two namenodes need passwordless ssh between them, so don't forget to configure centos-aaron-ha-02 to centos-aaron-ha-01 as well

# generate a key pair on centos-aaron-ha-02
ssh-keygen -t rsa
ssh-copy-id centos-aaron-ha-01
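
A quick test from centos-aaron-ha-02 (no password prompt should appear):

ssh centos-aaron-ha-01 hostname    # should print centos-aaron-ha-01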

           (6) JDK 1.8 setup (install starting from the primary node centos-aaron-ha-01)

# upload the file over sftp (Alt+p opens the sftp session in the ssh client)
Alt+p
lcd d:/
put jdk-8u191-linux-x64.tar.gz
sudo tar -zxvf jdk-8u191-linux-x64.tar.gz -C /usr/local
# edit the profile
sudo vi /etc/profile
# jump to the end of the file
shift+G
# open a new line for input
o
# append the following
JAVA_HOME=/usr/local/jdk1.8.0_191/
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
# save and exit (or Esc -> shift+: -> wq! -> Enter)
shift+z+z
# distribute to all servers
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.150:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.151:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.152:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.153:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.154:/usr/local/
sudo scp  -r /usr/local/jdk1.8.0_191 root@192.168.29.155:/usr/local/
sudo scp /etc/profile  root@192.168.29.150:/etc/profile
sudo scp /etc/profile  root@192.168.29.151:/etc/profile
sudo scp /etc/profile  root@192.168.29.152:/etc/profile
sudo scp /etc/profile  root@192.168.29.153:/etc/profile
sudo scp /etc/profile  root@192.168.29.154:/etc/profile
sudo scp /etc/profile  root@192.168.29.155:/etc/profile
# apply the configuration
source /etc/profile
# verify the JDK
java -version
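
Because /etc/profile only applies to new shells, here is an optional convenience loop (a sketch, assuming root ssh login is still allowed) to confirm the JDK works on every server:

for ip in 192.168.29.150 192.168.29.151 192.168.29.152 \
          192.168.29.153 192.168.29.154 192.168.29.155; do
  ssh root@$ip 'source /etc/profile; java -version'
done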

           (7) ZooKeeper installation (install starting from centos-aaron-ha-05)

# upload zookeeper
scp zookeeper-3.4.13.tar.gz hadoop@192.168.29.153:/home/hadoop
# unpack zookeeper
mkdir /home/hadoop/apps/
tar -zxvf zookeeper-3.4.13.tar.gz -C apps/
# edit the configuration file
cd /home/hadoop/apps/zookeeper-3.4.13/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
# add the following and remove the existing default dataDir setting
dataDir=/home/hadoop/apps/zookeeper-3.4.13/data
dataLogDir=/home/hadoop/apps/zookeeper-3.4.13/log
server.1=192.168.29.153:2888:3888
server.2=192.168.29.154:2888:3888
server.3=192.168.29.155:2888:3888
# create the data and log directories with the right permissions
cd /home/hadoop/apps/zookeeper-3.4.13/
mkdir -m 755 data
mkdir -m 755 log
# create a myid file under /home/hadoop/apps/zookeeper-3.4.13/data; its content is the
# number after "server." for the current machine (adjust per server)
cd data
vi myid
# or
echo "1" > myid
# distribute to the other zookeeper nodes
scp -r /home/hadoop/apps/zookeeper-3.4.13 hadoop@192.168.29.154:/home/hadoop/apps/
scp -r /home/hadoop/apps/zookeeper-3.4.13 hadoop@192.168.29.155:/home/hadoop/apps/
# adjust the config on the other machines:
# on 192.168.29.154: set myid to 2
# on 192.168.29.155: set myid to 3
# start zookeeper
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh start
# check cluster status
jps                                                          # check processes
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh status    # leader/follower status
# stop the cluster
/home/hadoop/apps/zookeeper-3.4.13/bin/zkServer.sh stop
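
If passwordless ssh is available from your working node, the myid adjustments on the other two machines can also be done remotely (a convenience sketch using the paths configured above):

ssh hadoop@192.168.29.154 'echo 2 > /home/hadoop/apps/zookeeper-3.4.13/data/myid'
ssh hadoop@192.168.29.155 'echo 3 > /home/hadoop/apps/zookeeper-3.4.13/data/myid'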

    3. Hadoop HA cluster setup

             (1) Upload centos6.9-hadoop-2.9.1.tar.gz

             (2) Unpack hadoop to /home/hadoop/apps/: tar -zxvf centos6.9-hadoop-2.9.1.tar.gz -C /home/hadoop/apps/

             (3) Set JAVA_HOME in hadoop-env.sh and yarn-env.sh to the real JDK directory

cd /home/hadoop/apps/hadoop-2.9.1/etc/hadoop
vi hadoop-env.sh
# change the value of this line to:
export JAVA_HOME=/usr/local/jdk1.8.0_191

vi yarn-env.sh
# uncomment the export JAVA_HOME line and change its value to:
export JAVA_HOME=/usr/local/jdk1.8.0_191

             (4) Add hadoop to the environment variables

# update the configuration on centos-aaron-ha-01
vi /etc/profile
export HADOOP_HOME=/home/hadoop/apps/hadoop-2.9.1/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# distribute the configuration
sudo scp /etc/profile  root@192.168.29.150:/etc/profile
sudo scp /etc/profile  root@192.168.29.151:/etc/profile
sudo scp /etc/profile  root@192.168.29.152:/etc/profile
sudo scp /etc/profile  root@192.168.29.153:/etc/profile
sudo scp /etc/profile  root@192.168.29.154:/etc/profile
sudo scp /etc/profile  root@192.168.29.155:/etc/profile
# apply the configuration
source /etc/profile
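
A quick check that the PATH change took effect:

hadoop version    # should report Hadoop 2.9.1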

             (5) Configure core-site.xml

<configuration>
    <!-- set the hdfs nameservice to bi -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bi/</value>
    </property>
    <!-- hadoop temp directory -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hdpdata</value>
    </property>
    <!-- zookeeper quorum addresses -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>centos-aaron-ha-05:2181,centos-aaron-ha-06:2181,centos-aaron-ha-07:2181</value>
    </property>
</configuration>

             (6) Edit hdfs-site.xml

<configuration>
    <!-- the hdfs nameservice is bi; must match core-site.xml -->
    <property>
        <name>dfs.nameservices</name>
        <value>bi</value>
    </property>
    <!-- the bi nameservice has two NameNodes: nn1 and nn2 -->
    <property>
        <name>dfs.ha.namenodes.bi</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC address of nn1 -->
    <property>
        <name>dfs.namenode.rpc-address.bi.nn1</name>
        <value>centos-aaron-ha-01:9000</value>
    </property>
    <!-- http address of nn1 -->
    <property>
        <name>dfs.namenode.http-address.bi.nn1</name>
        <value>centos-aaron-ha-01:50070</value>
    </property>
    <!-- RPC address of nn2 -->
    <property>
        <name>dfs.namenode.rpc-address.bi.nn2</name>
        <value>centos-aaron-ha-02:9000</value>
    </property>
    <!-- http address of nn2 -->
    <property>
        <name>dfs.namenode.http-address.bi.nn2</name>
        <value>centos-aaron-ha-02:50070</value>
    </property>
    <!-- where the NameNode edits metadata is stored on the JournalNodes -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://centos-aaron-ha-05:8485;centos-aaron-ha-06:8485;centos-aaron-ha-07:8485/bi</value>
    </property>
    <!-- where the JournalNodes keep their data on local disk -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/journaldata</value>
    </property>
    <!-- enable automatic NameNode failover -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- how clients find the active NameNode after a failover -->
    <property>
        <name>dfs.client.failover.proxy.provider.bi</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- fencing methods; multiple methods are separated by newlines, one per line
         (shell(/bin/true) acts as a fallback so failover can still proceed when the
         failed node is unreachable over ssh) -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
            sshfence
            shell(/bin/true)
        </value>
    </property>
    <!-- the sshfence method requires passwordless ssh -->
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connect timeout -->
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>

             (7) Edit mapred-site.xml

<configuration>
    <!-- run mapreduce on yarn -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/*, /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/lib/*</value>
    </property>
</configuration>

             (8) Edit yarn-site.xml

<configuration>
    <!-- enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- the RM cluster id -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- logical names of the RMs -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <!-- the host of each RM -->
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>centos-aaron-ha-03</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>centos-aaron-ha-04</value>
    </property>
    <!-- web UI address and port of rm1 and rm2; job status can be viewed there -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>centos-aaron-ha-03:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>centos-aaron-ha-04:8088</value>
    </property>
    <!-- zookeeper quorum addresses -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>centos-aaron-ha-05:2181,centos-aaron-ha-06:2181,centos-aaron-ha-07:2181</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

             (9) Distribute the configured hadoop to the other servers

sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-02:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-03:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-04:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-05:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-06:/home/hadoop/apps
sudo scp -r /home/hadoop/apps/  hadoop@centos-aaron-ha-07:/home/hadoop/apps

             (10) Edit slaves (slaves specifies the worker nodes; since HDFS is started on centos-aaron-ha-01 and YARN on centos-aaron-ha-03, the slaves file on centos-aaron-ha-01 specifies where the datanodes run, and the one on centos-aaron-ha-03 specifies where the nodemanagers run)

vi /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/slaves 
# replace the contents with
centos-aaron-ha-05
centos-aaron-ha-06
centos-aaron-ha-07
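
Since the slaves file is read by whichever node runs the start scripts, you can sync the edited file to centos-aaron-ha-03 as well (a small convenience sketch, assuming passwordless ssh as hadoop):

scp /home/hadoop/apps/hadoop-2.9.1/etc/hadoop/slaves hadoop@centos-aaron-ha-03:/home/hadoop/apps/hadoop-2.9.1/etc/hadoop/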

             (11) Start the zookeeper cluster (start zk on centos-aaron-ha-05, centos-aaron-ha-06 and centos-aaron-ha-07)

cd /home/hadoop/apps/zookeeper-3.4.13/bin/
./zkServer.sh start
# check status: one leader, two followers
./zkServer.sh status

             (12) Start the journalnodes (run on centos-aaron-ha-05, centos-aaron-ha-06 and centos-aaron-ha-07)

cd /home/hadoop/apps/hadoop-2.9.1/
sbin/hadoop-daemon.sh start journalnode
# run jps to verify: centos-aaron-ha-05, 06 and 07 should each show a new JournalNode process

             (13) Format HDFS

# run on centos-aaron-ha-01:
hdfs namenode -format
# formatting generates files under the hadoop.tmp.dir configured in core-site.xml
# (here /home/hadoop/hdpdata/); copy that directory to centos-aaron-ha-02:
scp -r hdpdata/ centos-aaron-ha-02:/home/hadoop/
## alternatively (recommended): run "hdfs namenode -bootstrapStandby" on centos-aaron-ha-02
## [note: this requires the namenode on centos-aaron-ha-01 to be started first: hadoop-daemon.sh start namenode]
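
Spelled out, the recommended bootstrapStandby route looks like this (the same two commands noted above, in order, each on the indicated host):

# on centos-aaron-ha-01: start the freshly formatted namenode
hadoop-daemon.sh start namenode

# on centos-aaron-ha-02: pull the namenode metadata from the active node
hdfs namenode -bootstrapStandby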

             (14) Format ZKFC (run once on centos-aaron-ha-01)

hdfs zkfc -formatZK
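
Optionally, you can confirm the format registered in ZooKeeper by listing the HA parent znode from any zk node (zkCli.sh ships with ZooKeeper; /hadoop-ha is the HDFS default):

/home/hadoop/apps/zookeeper-3.4.13/bin/zkCli.sh -server centos-aaron-ha-05:2181
# inside the zk shell:
ls /hadoop-ha        # should list the nameservice, e.g. [bi]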

             (15) Start HDFS (run on centos-aaron-ha-01)

sh start-dfs.sh
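
If startup succeeded, jps on each machine should roughly match the plan in section 1 (a quick sanity check):

# on centos-aaron-ha-01 and centos-aaron-ha-02:
jps    # NameNode, DFSZKFailoverController
# on centos-aaron-ha-05/06/07:
jps    # DataNode, JournalNode, QuorumPeerMain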

             (16) Start YARN (note: run start-yarn.sh on centos-aaron-ha-03; the namenode and resourcemanager are placed on separate machines for performance reasons, since both consume a lot of resources, and being separated they must each be started on their own machine)

sh start-yarn.sh

             (17) Start the resourcemanager on centos-aaron-ha-04

yarn-daemon.sh start resourcemanager
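
You can check right away which ResourceManager became active (the same check appears again under section 4):

yarn rmadmin -getServiceState rm1    # expect: active
yarn rmadmin -getServiceState rm2    # expect: standby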

             (18) With that, hadoop-2.9.1 is fully configured, and you can check it in a browser:

http://centos-aaron-ha-01:50070
NameNode 'centos-aaron-ha-01:9000' (active)
http://centos-aaron-ha-02:50070
NameNode 'centos-aaron-ha-02:9000' (standby)

            (19) Verify HDFS HA

# first upload a file to hdfs
hadoop fs -put /etc/profile /profile
hadoop fs -ls /
# then kill the active NameNode
kill -9 <pid of NN>
# browse to http://centos-aaron-ha-02:50070
# NameNode 'centos-aaron-ha-02:9000' (active)
# the NameNode on centos-aaron-ha-02 has now become active
# run the listing again:
hadoop fs -ls /
-rw-r--r--   3 hadoop supergroup       2111 2019-01-06 14:07 /profile
# the file uploaded earlier is still there!
# manually restart the NameNode that was killed
hadoop-daemon.sh start namenode
# browse to http://centos-aaron-ha-01:50070
# NameNode 'centos-aaron-ha-01:9000' (standby)

            (20) Verify YARN: run the WordCount demo that ships with hadoop:

 hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount hdfs://bi/profile /out

    4. Run results

[hadoop@centos-aaron-ha-03 hadoop]$ hadoop jar /home/hadoop/apps/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount hdfs://bi/bxx /out2
19/01/08 18:43:16 INFO input.FileInputFormat: Total input files to process : 1
19/01/08 18:43:17 INFO mapreduce.JobSubmitter: number of splits:1
19/01/08 18:43:17 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
19/01/08 18:43:17 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
19/01/08 18:43:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1546944151190_0001
19/01/08 18:43:17 INFO impl.YarnClientImpl: Submitted application application_1546944151190_0001
19/01/08 18:43:17 INFO mapreduce.Job: The url to track the job: http://centos-aaron-ha-03:8088/proxy/application_1546944151190_0001/
19/01/08 18:43:17 INFO mapreduce.Job: Running job: job_1546944151190_0001
19/01/08 18:43:26 INFO mapreduce.Job: Job job_1546944151190_0001 running in uber mode : false
19/01/08 18:43:26 INFO mapreduce.Job:  map 0% reduce 0%
19/01/08 18:43:33 INFO mapreduce.Job:  map 100% reduce 0%
19/01/08 18:43:39 INFO mapreduce.Job:  map 100% reduce 100%
19/01/08 18:43:39 INFO mapreduce.Job: Job job_1546944151190_0001 completed successfully
19/01/08 18:43:39 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=98
                FILE: Number of bytes written=401749
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=122
                HDFS: Number of bytes written=60
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=1
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=4081
                Total time spent by all reduces in occupied slots (ms)=3394
                Total time spent by all map tasks (ms)=4081
                Total time spent by all reduce tasks (ms)=3394
                Total vcore-milliseconds taken by all map tasks=4081
                Total vcore-milliseconds taken by all reduce tasks=3394
                Total megabyte-milliseconds taken by all map tasks=4178944
                Total megabyte-milliseconds taken by all reduce tasks=3475456
        Map-Reduce Framework
                Map input records=5
                Map output records=8
                Map output bytes=76
                Map output materialized bytes=98
                Input split bytes=78
                Combine input records=8
                Combine output records=8
                Reduce input groups=8
                Reduce shuffle bytes=98
                Reduce input records=8
                Reduce output records=8
                Spilled Records=16
                Shuffled Maps =1
                Failed Shuffles=0
                Merged Map outputs=1
                GC time elapsed (ms)=176
                CPU time spent (ms)=1030
                Physical memory (bytes) snapshot=362827776
                Virtual memory (bytes) snapshot=4141035520
                Total committed heap usage (bytes)=139497472
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=44
        File Output Format Counters 
                Bytes Written=60
[hadoop@centos-aaron-ha-03 hadoop]$
[hadoop@centos-aaron-ha-03 hadoop]$ hdfs dfs -ls /out2
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2019-01-08 18:43 /out2/_SUCCESS
-rw-r--r--   3 hadoop supergroup         60 2019-01-08 18:43 /out2/part-r-00000
[hadoop@centos-aaron-ha-03 hadoop]$ hdfs dfs -cat /out2/part-r-00000
ddfsZZ  1
df      1
dsfsd   1
hello   1
sdfdsf  1
sdfsd   1
sdss    1
xxx     1
[hadoop@centos-aaron-ha-03 hadoop]$

YARN cluster status check:

[hadoop@centos-aaron-ha-03 ~]$ yarn rmadmin -getServiceState rm1
 active
[hadoop@centos-aaron-ha-04 ~]$ yarn rmadmin -getServiceState rm2
 standby

HDFS cluster status check:
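
The analogous commands for the NameNodes (expected states assuming the failover test in step 19 left nn2 active and the restarted nn1 standby):

hdfs haadmin -getServiceState nn1    # e.g. standby
hdfs haadmin -getServiceState nn2    # e.g. active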

    5. Final summary

           This cluster build ran into a few issues. For example, after the cluster was up everything else worked, but MapReduce jobs would not run. Diagnosing that required reading the MR application container logs; in my case the cause was the hadoop classpath setting in mapred-site.xml together with the webapp port setting in yarn-site.xml. When you hit problems, use the logs in the YARN web UI to pin them down.

           To wrap up: that's all for this post. If you found it useful, please give it a like; if you're interested in my other big data articles or in me, follow this blog, and feel free to reach out and chat any time.
