1、Software versions
Oracle JDK 7, Hadoop 2.7.3, HBase 1.2.3, CentOS 7
2、Notes
1. The operating system hostname must be configured on every node; do not use raw IP addresses, otherwise ZooKeeper will report errors.
2. Disable the operating system firewall, or whitelist the required ports.
3. Synchronize the clocks across all nodes, otherwise the HBase Master process will report errors (https://my.oschina.net/nk2011/blog/784015); a sketch of one way to do this follows.
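A minimal way to keep CentOS 7 clocks in sync is the ntp package; the pool server below is an assumption, substitute your own NTP source:
# run on every node: install the NTP daemon
yum install -y ntp
# one-off sync in case the clock is far off (pool.ntp.org is an assumed, reachable NTP source)
ntpdate pool.ntp.org
# keep the clock in sync from now on
systemctl enable ntpd
systemctl start ntpd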
3、The cluster consists of 4 machines; hostnames and IPs map as follows
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4
4、Role assignment
Hadoop:
node1 namenode
node2 secondarynamenode
node3 datanode
node4 datanode
HBase:
node1 master zookeeper
node2 backup-master zookeeper regionserver
node3 zookeeper regionserver
5、Setting up Hadoop HDFS
1. Disable the firewall
systemctl stop firewalld
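Stopping firewalld only lasts until the next reboot; an optional extra (not in the original steps) is to disable it permanently and verify:
systemctl disable firewalld   # keep the firewall off across reboots
systemctl status firewalld    # verify the service is inactive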
2. Edit /etc/hostname and set the machine's hostname
3. Edit /etc/hosts and add the IP-to-hostname mappings for all four nodes, as shown below
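Each node's /etc/hosts should carry the full mapping from section 3:
192.168.1.1 node1
192.168.1.2 node2
192.168.1.3 node3
192.168.1.4 node4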
4. Create the hadoop user and set its password
useradd hadoop
passwd hadoop
5. Install JDK 7 and extract Hadoop 2.7.3 into /opt
6. Change the owner of the Hadoop and HBase directories
chown -R hadoop:hadoop /opt/hadoop-2.7.3
chown -R hadoop:hadoop /opt/hbase-1.2.3
7. Create the HDFS data directories
mkdir /opt/hadoop-2.7.3/hdfs
# namenode data directory
mkdir /opt/hadoop-2.7.3/hdfs/name
# datanode data directory
mkdir /opt/hadoop-2.7.3/hdfs/data
# change the owner
chown -R hadoop:hadoop /opt/hadoop-2.7.3/hdfs
8. Set up passwordless SSH login (https://my.oschina.net/nk2011/blog/778623)
# switch to the hadoop user
su hadoop
# generate an RSA key pair for SSH
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# copy the public key to every server
ssh-copy-id hadoop@node1
ssh-copy-id hadoop@node2
ssh-copy-id hadoop@node3
ssh-copy-id hadoop@node4
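As a quick sanity check (not part of the original steps), a remote command should now run without a password prompt:
ssh hadoop@node2 hostname   # should print "node2" without asking for a password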
9. Configure the Hadoop cluster
1) Open ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and set JAVA_HOME, e.g. as sketched below
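The exact value depends on where JDK 7 was installed; the path below is an assumption for illustration:
export JAVA_HOME=/opt/jdk1.7.0_80   # assumed JDK 7 install path; adjust to yours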
2) Open ${HADOOP_HOME}/etc/hadoop/core-site.xml and add the following configuration
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
3) Open ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml and add the following configuration
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <!-- secondary namenode -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>node1:50070</value>
  </property>
</configuration>
4) Empty ${HADOOP_HOME}/etc/hadoop/slaves and add the following, one hostname per line
node2
node3
node4
5) Format the HDFS namenode
${HADOOP_HOME}/bin/hdfs namenode -format hadoop_cluster
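If formatting succeeds, the name directory gets populated; an optional check (not in the original) is to list it:
ls /opt/hadoop-2.7.3/hdfs/name/current   # should now contain fsimage and VERSION files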
10. Repeat steps 1-9 on node1, node2, node3 and node4
11. HDFS setup is complete; start Hadoop on node1
[hadoop@node1]$ hadoop-2.7.3/sbin/start-dfs.sh
Starting namenodes on [node1]
node1: starting namenode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node2.out
node4: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-node4.out
Starting secondary namenodes [node2]
node2: starting secondarynamenode, logging to /opt/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-node2.out
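To verify the daemons came up, an optional check (not in the original write-up) is jps on each node, or a datanode report from the namenode:
jps                                           # node1 should show NameNode; node2 SecondaryNameNode and DataNode; node3/node4 DataNode
/opt/hadoop-2.7.3/bin/hdfs dfsadmin -report   # should list three live datanodes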
6、Setting up HBase
1. Extract HBase 1.2.3 into /opt
2. Open ${HBASE_HOME}/conf/hbase-env.sh and set JAVA_HOME
3. In the same hbase-env.sh, set HBASE_PID_DIR; a sketch of both settings follows
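A minimal sketch with assumed paths (adjust to your layout); HBASE_MANAGES_ZK defaults to true, which matches this setup where HBase runs its own ZooKeeper quorum:
export JAVA_HOME=/opt/jdk1.7.0_80            # assumed JDK 7 install path
export HBASE_PID_DIR=/opt/hbase-1.2.3/pids   # assumed PID directory; any writable path works
export HBASE_MANAGES_ZK=true                 # default value; HBase starts/stops ZooKeeper itself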
4. Open ${HBASE_HOME}/conf/hbase-site.xml and add the following
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/hbase-1.2.3/zookeeper</value>
  </property>
</configuration>
5. Empty the ${HBASE_HOME}/conf/regionservers file and add the following, one hostname per line
node2
node3
6. Create the file ${HBASE_HOME}/conf/backup-masters and add the following
node2
7. Repeat steps 1-6 on node1, node2 and node3
8. HBase setup is complete; start HBase on node1 (make sure Hadoop is already running)
[hadoop@node1]$ /opt/hbase-1.2.3/bin/start-hbase.sh
node1: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node1.out
node2: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node2.out
node3: starting zookeeper, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-zookeeper-node3.out
starting master, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-master-node1.out
node2: starting regionserver, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-node2.out
node3: starting regionserver, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-regionserver-node3.out
node2: starting master, logging to /opt/hbase-1.2.3/bin/../logs/hbase-hadoop-master-node2.out
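As a final optional check (not part of the original write-up), the HBase shell's status command should report the active master, the backup master and both regionservers:
/opt/hbase-1.2.3/bin/hbase shell
hbase(main):001:0> status
# expect output along the lines of: 1 active master, 1 backup masters, 2 servers, 0 dead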