[TOC]

The ResourceManager (RM) is responsible for tracking the resources in a cluster and for scheduling applications (for example, MapReduce jobs). Before Hadoop 2.4, a cluster had only a single ResourceManager, and when it went down the whole cluster was affected. The high-availability feature adds redundancy in the form of an active/standby ResourceManager pair, so that failover is possible.
(Figure: YARN HA architecture.)

In this example, the roles are assigned to the nodes as shown in the following table:
| Node | Roles |
| --- | --- |
| centos01 | ResourceManager, NodeManager |
| centos02 | ResourceManager, NodeManager |
| centos03 | NodeManager |
The following walks through the YARN HA configuration step by step.

#7.1 Configuring yarn-site.xml

(1) Edit the yarn-site.xml file and add the following content:
```xml
<!-- YARN HA configuration -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>centos01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>centos02</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>centos01:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>centos02:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>centos01:2181,centos02:2181,centos03:2181</value>
</property>
<property><!-- Enable RM restart (recovery); the default is false -->
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
```
The configuration parameters above, explained:

- `yarn.resourcemanager.ha.enabled`: enables the RM HA feature.
- `yarn.resourcemanager.cluster-id`: identifies the cluster. If this option is set, make sure every RM has its own ID in the configuration.
- `yarn.resourcemanager.ha.rm-ids`: the list of logical IDs for the RMs. The names are free to choose; here they are set to `rm1,rm2`, and the settings below refer to these IDs.
- `yarn.resourcemanager.hostname.rm1`: the hostname of the corresponding RM. Alternatively, each of the RM's service addresses can be set individually.
- `yarn.resourcemanager.webapp.address.rm1`: the web UI address of the corresponding RM.
- `yarn.resourcemanager.zk-address`: the addresses of the integrated ZooKeeper ensemble.
- `yarn.resourcemanager.recovery.enabled`: enables RM restart (recovery); the default is false.
- `yarn.resourcemanager.store.class`: the class used for state storage. The default is `org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore`, an implementation based on the Hadoop file system. It can also be set to `org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore`, a ZooKeeper-based implementation, which is the one specified here.
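Since `ZKRMStateStore` is used, the RM's recovery state ends up in ZooKeeper. Once the cluster is running, this can be sanity-checked with the ZooKeeper CLI; a minimal sketch, assuming the default state-store parent path `/rmstore` (configurable via `yarn.resourcemanager.zk-state-store.parent-path`):

```sh
# Run from the ZooKeeper installation directory on any node.
bin/zkCli.sh -server centos01:2181

# At the zkCli prompt, list the RM state znodes:
#   ls /rmstore                 -> [ZKRMStateRoot] once a ResourceManager has started
#   ls /rmstore/ZKRMStateRoot   -> application and token state persisted by the RM
```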
(2) Once yarn-site.xml is configured, copy it to the other nodes in the cluster (for example, with scp).

(3) With the HDFS cluster started in the previous chapter still running, start YARN next. On both centos01 and centos02, run the following command to start a ResourceManager:
```
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start resourcemanager
```
On centos01, centos02, and centos03, run the following command to start a NodeManager:
```
[hadoop@centos01 hadoop-2.7.1]$ sbin/yarn-daemon.sh start nodemanager
```
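The two start commands above can also be issued from a single node over SSH. A convenience sketch, assuming passwordless SSH between the nodes, the same install path `/home/hadoop/hadoop-2.7.1` everywhere, and a Hadoop environment available to non-interactive shells (adjust to your setup):

```sh
#!/bin/sh
# Start a ResourceManager on centos01 and centos02 ...
for host in centos01 centos02; do
  ssh hadoop@"$host" "/home/hadoop/hadoop-2.7.1/sbin/yarn-daemon.sh start resourcemanager"
done
# ... and a NodeManager on all three nodes.
for host in centos01 centos02 centos03; do
  ssh hadoop@"$host" "/home/hadoop/hadoop-2.7.1/sbin/yarn-daemon.sh start nodemanager"
done
```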
(4) After YARN has started, check the Java processes on each node:
```
[hadoop@centos01 hadoop-2.7.1]$ jps
3360 QuorumPeerMain
4080 DFSZKFailoverController
4321 NodeManager
4834 Jps
3908 JournalNode
3702 DataNode
4541 ResourceManager
3582 NameNode

[hadoop@centos02 hadoop-2.7.1]$ jps
4486 Jps
3815 DFSZKFailoverController
4071 NodeManager
4359 ResourceManager
3480 NameNode
3353 QuorumPeerMain
3657 JournalNode
3563 DataNode

[hadoop@centos03 hadoop-2.7.1]$ jps
3496 JournalNode
4104 Jps
3836 NodeManager
3293 QuorumPeerMain
3390 DataNode
```
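Besides the web UI, the HA state of each ResourceManager can be queried with `yarn rmadmin`, using the logical IDs defined in `yarn.resourcemanager.ha.rm-ids`; on this cluster, rm1 should report active and rm2 standby:

```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm1
active
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
standby
```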
Now open http://centos01:8088 in a browser to reach the active ResourceManager and check that YARN is running. (Figure: the YARN web UI.) If you visit the standby ResourceManager's address, http://centos02:8088, the browser is automatically redirected to http://centos01:8088. This is because the active ResourceManager is currently on the centos01 node; visiting the ResourceManager on the standby node automatically redirects to the active node.

#7.2 Testing YARN automatic failover

Run the default MapReduce WordCount example on the centos01 node. While the map phase is executing, open a new SSH shell window, kill the ResourceManager process on centos01, and observe how the job proceeds. The command to run the WordCount example is:
```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
```
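While the map phase is running, the active ResourceManager is killed from the second shell. A minimal sketch; the PID (4541 here, matching the jps output above) must be replaced with the one on your own machine:

```
[hadoop@centos01 hadoop-2.7.1]$ jps | grep ResourceManager
4541 ResourceManager
[hadoop@centos01 hadoop-2.7.1]$ kill -9 4541
```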
The job output is as follows:
```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
18/03/16 10:48:22 INFO input.FileInputFormat: Total input paths to process : 1
18/03/16 10:48:22 INFO mapreduce.JobSubmitter: number of splits:1
18/03/16 10:48:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1521168402181_0001
18/03/16 10:48:23 INFO impl.YarnClientImpl: Submitted application application_1521168402181_0001
18/03/16 10:48:23 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1521168402181_0001/
18/03/16 10:48:23 INFO mapreduce.Job: Running job: job_1521168402181_0001
18/03/16 10:48:56 INFO mapreduce.Job: Job job_1521168402181_0001 running in uber mode : false
18/03/16 10:48:57 INFO mapreduce.Job:  map 0% reduce 0%
18/03/16 10:50:21 INFO mapreduce.Job:  map 100% reduce 0%
18/03/16 10:50:32 INFO mapreduce.Job:  map 100% reduce 100%
18/03/16 10:50:36 INFO mapreduce.Job: Job job_1521168402181_0001 completed successfully
18/03/16 10:50:37 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=1321
		FILE: Number of bytes written=239335
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=1094
		HDFS: Number of bytes written=971
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=14130
		Total time spent by all reduces in occupied slots (ms)=7851
		Total time spent by all map tasks (ms)=14130
		Total time spent by all reduce tasks (ms)=7851
		Total vcore-seconds taken by all map tasks=14130
		Total vcore-seconds taken by all reduce tasks=7851
		Total megabyte-seconds taken by all map tasks=14469120
		Total megabyte-seconds taken by all reduce tasks=8039424
	Map-Reduce Framework
		Map input records=29
		Map output records=109
		Map output bytes=1368
		Map output materialized bytes=1321
		Input split bytes=101
		Combine input records=109
		Combine output records=86
		Reduce input groups=86
		Reduce shuffle bytes=1321
		Reduce input records=86
		Reduce output records=86
		Spilled Records=172
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=188
		CPU time spent (ms)=1560
		Physical memory (bytes) snapshot=278478848
		Virtual memory (bytes) snapshot=4195344384
		Total committed heap usage (bytes)=140480512
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=993
	File Output Format Counters
		Bytes Written=971
```
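The failover can also be confirmed directly: after rm1's process was killed, rm2 reports itself as active (querying rm1 simply fails until its process is restarted):

```
[hadoop@centos01 hadoop-2.7.1]$ bin/yarn rmadmin -getServiceState rm2
active
```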
As the job log shows, even though the ResourceManager process was killed, YARN kept executing the job without interruption: automatic failover took effect, and after the failure the ResourceManager role switched to the centos02 node, where execution continued. At this point the standby ResourceManager's web address http://centos02:8088 can be opened successfully in the browser, and it shows that the job finished successfully.

This completes the YARN HA cluster setup.