Starting the daemons one by one:

sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager

A shell script (xxx.sh) is just a file of commands, the same ones you type at the prompt, such as ls or mkdir. Put the four start commands into a script named hadoop-start.sh:

sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager

Then make it executable:

chmod 744 hadoop-start.sh

Run it either way:
1. Relative path
./hadoop-start.sh
2. Absolute path
/opt/install/hadoop-2.5.2/hadoop-stop.sh
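The hadoop-stop.sh script referenced above is not shown in these notes. A minimal sketch (my assumption: it mirrors hadoop-start.sh, stopping the same four daemons in reverse order) could be created like this:

```shell
# Sketch of hadoop-stop.sh (assumed counterpart of hadoop-start.sh):
# stop the four daemons in the reverse of their start order.
cat > hadoop-stop.sh <<'EOF'
#!/bin/sh
sbin/yarn-daemon.sh stop nodemanager
sbin/yarn-daemon.sh stop resourcemanager
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh stop namenode
EOF
chmod 744 hadoop-stop.sh
```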
How HDFS cluster configuration works
SSH passwordless login
Generate a public/private key pair with ssh-keygen:
ssh-keygen -t rsa
Send the public key to the remote host:
ssh-copy-id user@ip
Modify the slaves file
vi /opt/install/hadoop-2.5.2/etc/hadoop/slaves
(list each slave node's hostname or IP, one per line)
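For example, judging by the start-dfs.sh output later in these notes (DataNodes start on hadoop, hadoop1, and hadoop2), the slaves file here would contain:

```
hadoop
hadoop1
hadoop2
```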
Building the HDFS cluster
SSH passwordless login
ssh-keygen -t rsa
ssh-copy-id 用户@ip
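On a multi-node cluster the ssh-copy-id step can be wrapped in a loop. A sketch, assuming the three hostnames used elsewhere in these notes (hadoop, hadoop1, hadoop2) and the root user:

```shell
# Hostnames taken from the cluster in these notes; adjust for yours.
NODES="hadoop hadoop1 hadoop2"
for node in $NODES; do
  # On a real cluster, run: ssh-copy-id "root@$node"
  echo "ssh-copy-id root@$node"
done
```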
Remove the cached MAC address rule (needed after cloning a VM)
rm -rf /etc/udev/rules.d/70-persistent-net.rules
Configure the network
1. Set the IP address, hostname, and hosts mapping; disable the firewall; disable SELinux
Install the JDK and Hadoop
1. Install the JDK
2. Unpack Hadoop
3. Configuration files (must be identical on every node):
   hadoop-env.sh
   core-site.xml
   hdfs-site.xml
   yarn-site.xml
   mapred-site.xml
   slaves
4. Format the NameNode, on the node where the NameNode runs (formatting clears the existing data/tmp contents):
   bin/hdfs namenode -format
5. Start the services:
   sbin/start-dfs.sh
   The output below indicates success. (If a slave node fails to connect, ssh to it manually first and make sure you can log in with no password and no prompt.)
   [root@hadoop hadoop-2.5.2]# sbin/start-dfs.sh
   19/01/23 04:09:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Starting namenodes on [hadoop]
   hadoop: starting namenode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-namenode-hadoop.out
   hadoop2: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop2.out
   hadoop: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop.out
   hadoop1: starting datanode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-datanode-hadoop1.out
   Starting secondary namenodes [0.0.0.0]
   0.0.0.0: starting secondarynamenode, logging to /opt/install/hadoop-2.5.2/logs/hadoop-root-secondarynamenode-hadoop.out
   19/01/23 04:10:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
6. Run jps:
   [root@hadoop hadoop-2.5.2]# jps
   3034 DataNode
   3178 SecondaryNameNode
   3311 Jps
   2946 NameNode
   2824 GetConf
7. Run jps on the slave nodes; the following is normal:
   [root@hadoop1 etc]# jps
   1782 Jps
   1715 DataNode
Visit hadoop:50070 to check the DataNodes:
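To check the jps output programmatically, a small helper (my addition, not part of the original notes; check_daemons is a hypothetical function, not a Hadoop command) can grep for the expected master-node daemons:

```shell
# Verify that the expected HDFS daemons appear in `jps` output.
# check_daemons is a hypothetical helper, not a Hadoop command.
check_daemons() {
  jps_out="$1"
  for d in NameNode DataNode SecondaryNameNode; do
    if ! echo "$jps_out" | grep -q "$d"; then
      echo "missing: $d"
      return 1
    fi
  done
  echo "all daemons running"
}

# On the master node you would call: check_daemons "$(jps)"
```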

NameNode persistence [for reference]
What is NameNode persistence?
The NameNode keeps HDFS metadata in memory; persistence writes that metadata to disk as an FSImage snapshot plus an EditsLog of subsequent changes.
Default storage location of the FSImage and EditsLog files
# Default location: /opt/install/hadoop-2.5.2/data/tmp/dfs/name
hadoop.tmp.dir=/opt/install/hadoop-2.5.2/data/tmp
dfs.namenode.name.dir=file://${hadoop.tmp.dir}/dfs/name
dfs.namenode.edits.dir=${dfs.namenode.name.dir}
How do you customize where FSImage and EditsLog are stored?
hdfs-site.xml
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/xxx/xxx</value>
</property>
<property>
  <name>dfs.namenode.edits.dir</name>
  <value>/xxx/xxx</value>
</property>
Safe mode (safemode)
Every time the NameNode restarts, it merges the EditsLog into the FSImage. To prevent user write operations from interfering with the system during this merge, HDFS enters safe mode (safemode), in which writes are not allowed. Once the merge completes, safe mode exits automatically.
Manually controlling safe mode
bin/hdfs dfsadmin -safemode enter|leave|get
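The get subcommand prints a status line such as "Safe mode is ON". A small wrapper (hypothetical, my addition) can check that text before attempting writes:

```shell
# Returns 0 (true) when the status text reports safe mode ON.
# On a real cluster, pass in "$(bin/hdfs dfsadmin -safemode get)".
in_safemode() {
  case "$1" in
    *"Safe mode is ON"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: in_safemode "$(bin/hdfs dfsadmin -safemode get)" && echo "writes blocked"
```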
SecondaryNameNode
Periodically merges the FSImage and EditsLog
When the NameNode process dies, or the disk holding the FSImage and EditsLog is damaged, it can partially restore the NameNode's data
The FSImage and EditsLog copies fetched by the SecondaryNameNode are stored in:
/opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary

# How the SecondaryNameNode restores NameNode data
# rm -rf /opt/install/hadoop-2.5.2/data/tmp/dfs/namesecondary/in_use.lock

1. Point the NameNode's persisted FSImage and EditsLog at new locations in hdfs-site.xml:
   <property>
     <name>dfs.namenode.name.dir</name>
     <value>file:///opt/install/nn/fs</value>
   </property>
   <property>
     <name>dfs.namenode.edits.dir</name>
     <value>file:///opt/install/nn/edits</value>
   </property>
2. kill the NameNode process (just to simulate a NameNode crash).
   Check the log at logs/hadoop-root-namenode-hadoop.log; tail -100 shows the latest 100 lines.
3. Restore the NameNode from the SecondaryNameNode:
   sbin/hadoop-daemon.sh start namenode -importCheckpoint
   If the NameNode does not start, check whether the data/tmp/dfs/namesecondary directory is locked; if it is, delete the in_use.lock file in that directory.