If you read the previous post, you probably don't need much of an introduction, but here is a quick recap. The SecondaryNameNode merges the NameNode's fsimage and edits files, so it keeps a copy of the fsimage and edits from the last merge. We can therefore copy that data back to the NameNode to recover it; note, however, that any edits written after the last merge are still lost. Below are two ways to do the recovery: copying the SecondaryNameNode's data directly, and importing it with hdfs namenode -importCheckpoint.
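As a side note, you can read the recoverable state straight off the checkpoint directory: the numeric suffix of the newest fsimage_* file is the last transaction the checkpoint covers, and edits after that transaction are what recovery cannot bring back. A minimal sketch, with made-up paths and transaction IDs standing in for a real namesecondary directory:

```shell
# Sketch only: a stand-in for the SecondaryNameNode's checkpoint directory.
ckpt_tmp=$(mktemp -d)
mkdir -p "$ckpt_tmp/namesecondary/current"
touch "$ckpt_tmp/namesecondary/current/fsimage_0000000000000000012"
touch "$ckpt_tmp/namesecondary/current/fsimage_0000000000000000025"

# The highest-numbered fsimage_* is the last checkpoint taken.
latest=$(ls "$ckpt_tmp/namesecondary/current" | sort | tail -1)
echo "last checkpointed txid: ${latest#fsimage_}"
```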
1. Kill the NameNode process (kill -9).
2. Delete the data stored by the NameNode (/opt/module/hadoop-2.7.2/data/tmp/dfs/name).
[lsl@hadoop102 hadoop-2.7.2]$ rm -rf /opt/module/hadoop-2.7.2/data/tmp/dfs/name/*
3. Copy the data from the node running the SecondaryNameNode into the original NameNode's storage directory.
[lsl@hadoop102 dfs]$ scp -r lsl@hadoop104:/opt/module/hadoop-2.7.2/data/tmp/dfs/namesecondary/* ./name/
4. Restart the NameNode.
[lsl@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
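The wipe-and-copy in steps 2–3 can be rehearsed safely against throwaway directories before touching the real cluster. Everything below is a stand-in: cp replaces the scp -r you would run between hosts, and the file names are illustrative only.

```shell
recover_tmp=$(mktemp -d)
# Stand-ins for the NameNode's name/ dir and the SNN's namesecondary/ dir.
mkdir -p "$recover_tmp/name/current" "$recover_tmp/namesecondary/current"
touch "$recover_tmp/name/current/edits_inprogress_0000000000000000026"  # stale data
touch "$recover_tmp/namesecondary/current/fsimage_0000000000000000025"
touch "$recover_tmp/namesecondary/current/VERSION"

rm -rf "$recover_tmp/name"/*                               # step 2: wipe the name dir
cp -r "$recover_tmp/namesecondary/"* "$recover_tmp/name/"  # step 3 (scp -r on a real cluster)
ls "$recover_tmp/name/current"
```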
1. Modify hdfs-site.xml:
<property>
    <name>dfs.namenode.checkpoint.period</name>
    <value>120</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/module/hadoop-2.7.2/data/tmp/dfs/name</value>
</property>
2. Kill the NameNode process (kill -9).
3. Delete the data stored by the NameNode (/opt/module/hadoop-2.7.2/data/tmp/dfs/name).
[lsl@hadoop102 hadoop-2.7.2]$ rm -rf /opt/module/hadoop-2.7.2/data/tmp/dfs/name/*
4. If the SecondaryNameNode is not on the same host as the NameNode, copy the SecondaryNameNode's storage directory to the same level as the NameNode's storage directory, and delete the in_use.lock file.
[lsl@hadoop102 dfs]$ scp -r lsl@hadoop104:/opt/module/hadoop-2.7.2/data/tmp/dfs/namesecondary ./
[lsl@hadoop102 namesecondary]$ rm -rf in_use.lock
[lsl@hadoop102 dfs]$ pwd
/opt/module/hadoop-2.7.2/data/tmp/dfs
[lsl@hadoop102 dfs]$ ls
data  name  namesecondary
5. Import the checkpoint data (let it run for a while, then press Ctrl+C to stop it).
[lsl@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -importCheckpoint
6. Start the NameNode.
[lsl@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
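Note the difference from the first method: here namesecondary/ ends up beside name/ rather than being copied into it, and in_use.lock must be removed so that -importCheckpoint can take its own lock on the directory. The expected layout can be sketched with stand-in paths:

```shell
import_tmp=$(mktemp -d)
# Stand-in for /opt/module/hadoop-2.7.2/data/tmp/dfs after the scp in step 4.
mkdir -p "$import_tmp/dfs/namesecondary/current" "$import_tmp/dfs/name" "$import_tmp/dfs/data"
touch "$import_tmp/dfs/namesecondary/in_use.lock"
touch "$import_tmp/dfs/namesecondary/current/fsimage_0000000000000000025"

rm -f "$import_tmp/dfs/namesecondary/in_use.lock"  # step 4: drop the stale lock
ls "$import_tmp/dfs"                               # data  name  namesecondary
```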
Copyright notice: this blog records my own study notes; please credit the source when reposting!
https://me.csdn.net/qq_39657909