HDFS HA and Federation

cluster1 and cluster2 together form an HDFS federation: cluster1 has namenodeA and namenodeB, and cluster2 has namenodeC and namenodeD.
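In hdfs-site.xml this layout corresponds to dfs.nameservices = cluster1,cluster2, with dfs.ha.namenodes.cluster1 = namenodeA,namenodeB and dfs.ha.namenodes.cluster2 = namenodeC,namenodeD. Once the configuration has been distributed, a quick sanity check from any configured node looks like the minimal sketch below; the key names are standard HDFS settings, and the expected values are simply what this layout implies:

hdfs getconf -confKey dfs.nameservices            # expect: cluster1,cluster2
hdfs getconf -confKey dfs.ha.namenodes.cluster1   # expect: namenodeA,namenodeB
hdfs getconf -confKey dfs.ha.namenodes.cluster2   # expect: namenodeC,namenodeD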

Once everything is installed and configured, the startup order should be as follows (a consolidated script sketch of the same sequence appears after the list):

1. Start ZooKeeper on every node of the ZooKeeper ensemble: zkServer.sh start

2. On cluster1's namenodeA, format the HA state in ZooKeeper: hdfs zkfc -formatZK

3. On cluster2's namenodeC, format the HA state in ZooKeeper: hdfs zkfc -formatZK

4. Start the JournalNode on every node of the journal cluster: hadoop-daemon.sh start journalnode

5. Format cluster1's HDFS on namenodeA: hdfs namenode -format -clusterId cluster1

6. Start cluster1's namenodeA: hadoop-daemon.sh start namenode

7. On cluster1's namenodeB, copy namenodeA's metadata: hdfs namenode -bootstrapStandby

8. Start cluster1's namenodeB: hadoop-daemon.sh start namenode

9. Format cluster2's HDFS on namenodeC: hdfs namenode -format -clusterId cluster2 (note: DataNodes only register with NameNodes whose clusterID matches their own, so if the two nameservices are meant to join one federation they should in fact be formatted with the same -clusterId value)

10. Start cluster2's namenodeC: hadoop-daemon.sh start namenode

11. On cluster2's namenodeD, copy namenodeC's metadata: hdfs namenode -bootstrapStandby

12. Start cluster2's namenodeD: hadoop-daemon.sh start namenode

13. Start a ZKFC on every NameNode: hadoop-daemon.sh start zkfc

14. Start all DataNodes: hadoop-daemon.sh start datanode

15. Start YARN on the ResourceManager node: start-yarn.sh

16. Verify HA by killing the active NameNode: kill -9 <pid of the active NameNode>

17. Submit a job to verify that YARN works.
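The same sequence, gathered into one driver script run from an administration node, makes the ordering easier to see at a glance. This is only a minimal sketch: the ZooKeeper, JournalNode, DataNode and ResourceManager hostnames (zk1-zk3, journal1-journal3, dn1-dn3, resourcemanager) are made up for illustration, and it assumes passwordless SSH plus the Hadoop and ZooKeeper bin/sbin directories on PATH on every node.

#!/bin/bash
# Hypothetical hostnames -- replace with your own cluster's hosts.
ZK_NODES="zk1 zk2 zk3"
JN_NODES="journal1 journal2 journal3"
DN_NODES="dn1 dn2 dn3"

# 1. Start ZooKeeper on every node of the ensemble.
for h in $ZK_NODES; do ssh "$h" "zkServer.sh start"; done

# 2-3. Format the HA state in ZooKeeper, once per nameservice.
ssh namenodeA "hdfs zkfc -formatZK"
ssh namenodeC "hdfs zkfc -formatZK"

# 4. Start every JournalNode before formatting the NameNodes.
for h in $JN_NODES; do ssh "$h" "hadoop-daemon.sh start journalnode"; done

# 5-8. Format and start cluster1: active first, then bootstrap the standby.
ssh namenodeA "hdfs namenode -format -clusterId cluster1"
ssh namenodeA "hadoop-daemon.sh start namenode"
ssh namenodeB "hdfs namenode -bootstrapStandby"    # copies namenodeA's metadata
ssh namenodeB "hadoop-daemon.sh start namenode"

# 9-12. Same for cluster2.
ssh namenodeC "hdfs namenode -format -clusterId cluster2"
ssh namenodeC "hadoop-daemon.sh start namenode"
ssh namenodeD "hdfs namenode -bootstrapStandby"    # copies namenodeC's metadata
ssh namenodeD "hadoop-daemon.sh start namenode"

# 13. Start a ZKFC alongside every NameNode.
for h in namenodeA namenodeB namenodeC namenodeD; do
  ssh "$h" "hadoop-daemon.sh start zkfc"
done

# 14. Start all DataNodes.
for h in $DN_NODES; do ssh "$h" "hadoop-daemon.sh start datanode"; done

# 15. Start YARN from the ResourceManager node.
ssh resourcemanager "start-yarn.sh"

For step 16, jps on the active NameNode gives the pid to kill, and each NameNode's web UI (port 50070 by default in Hadoop 2.x) shows whether it is now active or standby; for step 17, a stock example job such as hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10 is usually enough to confirm that YARN schedules work.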

Reference: the first detailed Chinese-language tutorial explaining Hadoop 2 automatic HA + Federation + YARN configuration.
