This article applies to Red Hat and CentOS.
For a test cluster, if you have installed a Hadoop cluster through Ambari and want to start over, you need to clean up the cluster first.
If many Hadoop components were installed, this is tedious work. Below is the cleanup procedure I have put together.
1. Stop all components in the cluster through Ambari. If a component will not stop, kill its process directly with kill -9 XXX.
2. Stop ambari-server and ambari-agent.
3. Uninstall the installed packages.
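If a component refuses to stop from the Ambari UI, one way to find the leftover processes to kill is a process listing; this is a sketch, and the grep pattern is an assumption that should be adjusted to the services actually installed:

```shell
# Find processes belonging to typical Hadoop-stack services (pattern is illustrative)
ps aux | grep -iE 'hadoop|hbase|zookeeper|hive|oozie|ambari' | grep -v grep

# Then kill a stubborn process by its PID (second column of the output above)
# kill -9 <PID>
```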
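The Ambari daemons ship their own control commands, so stopping them looks like:

```shell
# On the node running the Ambari server
ambari-server stop

# On every cluster node
ambari-agent stop
```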
The command list above may be incomplete. After running it, check whether any packages from the stack are still installed; if so, keep removing them with #yum remove XXX.
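A sketch of the uninstall-and-verify loop; the package name patterns below are assumptions and should be matched against what yum actually reports as installed on your nodes:

```shell
# Remove the Ambari and stack packages (names are illustrative; adjust to your install)
yum -y remove ambari-server ambari-agent
yum -y remove 'hadoop*' 'hbase*' 'zookeeper*' 'hive*' 'oozie*'

# Check what is still installed from the stack, then remove the leftovers by name
yum list installed | grep -iE 'ambari|hadoop|hdp'
```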
4. Delete the PostgreSQL data.
After the PostgreSQL packages are uninstalled, their data remains on disk and must be deleted. If it is not, a freshly installed ambari-server may pick up the old installation data, and that stale data is wrong, so delete it.
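On Red Hat/CentOS the default PostgreSQL data directory is /var/lib/pgsql/data. Assuming Ambari used the bundled PostgreSQL instance, deleting the data after uninstalling could look like:

```shell
# Stop the service if it is still running, then remove the data directory
service postgresql stop
rm -rf /var/lib/pgsql/data
```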
5. Delete the users.
Installing a Hadoop cluster with Ambari creates a number of service users. When cleaning up the cluster, remove these users and delete their home directories; this avoids file-permission errors when the cluster runs again after reinstallation.
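A sketch of removing the service users an Ambari install typically creates; the exact user list depends on which components were installed, so verify it against /etc/passwd first:

```shell
# Service users commonly created by an Ambari Hadoop install (list is illustrative)
for u in hdfs yarn mapred hbase hive oozie zookeeper storm kafka falcon \
         flume spark knox ambari-qa; do
  # -r also removes the user's home directory
  userdel -r "$u" 2>/dev/null
done
```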
6. Delete Ambari's leftover data.
7. Delete the leftover data of the other Hadoop components.
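The Ambari daemons keep state under /var/lib, /var/run, /var/log and /etc. A sketch of clearing it, using the usual default paths (not taken from the original post, so check they match your layout):

```shell
rm -rf /var/lib/ambari-server
rm -rf /var/lib/ambari-agent
rm -rf /var/run/ambari-server
rm -rf /var/run/ambari-agent
rm -rf /var/log/ambari-server
rm -rf /var/log/ambari-agent
rm -rf /etc/ambari-server
rm -rf /etc/ambari-agent
```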
rm -rf /etc/falcon
rm -rf /etc/knox
rm -rf /etc/hive-webhcat
rm -rf /etc/kafka
rm -rf /etc/slider
rm -rf /etc/storm-slider-client
rm -rf /etc/spark
rm -rf /var/run/spark
rm -rf /var/run/hadoop
rm -rf /var/run/hbase
rm -rf /var/run/zookeeper
rm -rf /var/run/flume
rm -rf /var/run/storm
rm -rf /var/run/webhcat
rm -rf /var/run/hadoop-yarn
rm -rf /var/run/hadoop-mapreduce
rm -rf /var/run/kafka
rm -rf /var/log/hadoop
rm -rf /var/log/hbase
rm -rf /var/log/flume
rm -rf /var/log/storm
rm -rf /var/log/hadoop-yarn
rm -rf /var/log/hadoop-mapreduce
rm -rf /var/log/knox
rm -rf /usr/lib/flume
rm -rf /usr/lib/storm
rm -rf /var/lib/hive
rm -rf /var/lib/oozie
rm -rf /var/lib/flume
rm -rf /var/lib/hadoop-hdfs
rm -rf /var/lib/knox
rm -rf /var/log/hive
rm -rf /var/log/oozie
rm -rf /var/log/zookeeper
rm -rf /var/log/falcon
rm -rf /var/log/webhcat
rm -rf /var/log/spark
rm -rf /var/tmp/oozie
rm -rf /tmp/ambari-qa
rm -rf /var/hadoop
rm -rf /hadoop/falcon
rm -rf /tmp/hadoop
rm -rf /tmp/hadoop-hdfs
rm -rf /usr/hdp
rm -rf /usr/hadoop
rm -rf /opt/hadoop
rm -rf /opt/hadoop2
rm -rf /hadoop
rm -rf /etc/ambari-metrics-collector
rm -rf /etc/ambari-metrics-monitor
rm -rf /var/run/ambari-metrics-collector
rm -rf /var/run/ambari-metrics-monitor
rm -rf /var/log/ambari-metrics-collector
rm -rf /var/log/ambari-metrics-monitor
rm -rf /var/lib/hadoop-yarn
rm -rf /var/lib/hadoop-mapreduce
8. Clean up the yum repositories.
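Assuming the stack was installed from the Ambari/HDP repositories, removing the repo files and clearing yum's cache could look like this; the repo file names are the HDP defaults and may differ in your setup:

```shell
rm -f /etc/yum.repos.d/ambari.repo
rm -f /etc/yum.repos.d/HDP.repo
rm -f /etc/yum.repos.d/HDP-UTILS.repo
yum clean all
```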