1. Configure host aliases, passwordless SSH access, and disable the firewall
10.213.***.70 master1.hadoop.yspay
10.213.***.71 slave1.hadoop.yspay
10.213.***.72 slave2.hadoop.yspay
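A minimal sketch of this step (the exact commands depend on the OS; firewalld on CentOS 7 and root access are assumed here):
# append the aliases above to /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
10.213.***.70 master1.hadoop.yspay
10.213.***.71 slave1.hadoop.yspay
10.213.***.72 slave2.hadoop.yspay
EOF
# passwordless SSH from the master to each node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@slave1.hadoop.yspay
ssh-copy-id root@slave2.hadoop.yspay
# disable the firewall (firewalld assumed)
systemctl stop firewalld && systemctl disable firewalld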
2. Download HBase 1.2.6, upload it to /usr/local/ on the target machine, and extract it
cd /usr/local/
tar -zxvf hbase-1.2.6-bin.tar.gz
ls /usr/local/hbase-1.2.6
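If the tarball is not already on the machine, it can be fetched from the Apache archive (URL assumed; verify against the archive listing before use):
cd /usr/local/
wget https://archive.apache.org/dist/hbase/1.2.6/hbase-1.2.6-bin.tar.gz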
3. Configure the HDFS address and the ZooKeeper address
cd /usr/local/hbase-1.2.6/conf
vi hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- must match the dfs.nameservices value in Hadoop's hdfs-site.xml -->
    <value>hdfs://NN1/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <!-- default HMaster port -->
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <!-- the ZooKeeper ensemble addresses -->
    <name>hbase.zookeeper.quorum</name>
    <value>master1.hadoop.yspay,slave1.hadoop.yspay,slave2.hadoop.yspay</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/var/hbase/tmp</value>
  </property>
</configuration>
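To confirm that hbase.rootdir points at the right nameservice, the value can be checked against Hadoop's own config (the /etc/hadoop/conf path is taken from step 5):
grep -A1 dfs.nameservices /etc/hadoop/conf/hdfs-site.xml
# the value shown (e.g. NN1) must appear in hbase.rootdir as hdfs://NN1/hbase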
4. Configure the slave (RegionServer) nodes
cd /usr/local/hbase-1.2.6/conf
vi regionservers
Add the IP addresses or hostname aliases of the machines that will act as RegionServers, for example as shown below
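Using the aliases from step 1 (whether the master also runs a RegionServer is a deployment choice; listing all three nodes here is just an assumption):
cat > /usr/local/hbase-1.2.6/conf/regionservers <<'EOF'
master1.hadoop.yspay
slave1.hadoop.yspay
slave2.hadoop.yspay
EOF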
5. Configure environment variables in conf/hbase-env.sh
vi /usr/local/hbase-1.2.6/conf/hbase-env.sh
# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/usr/jdk64/jdk1.8.0_112/
# Extra Java CLASSPATH elements. Optional. Point this at the directory that holds Hadoop's configuration files
export HBASE_CLASSPATH=/etc/hadoop/conf
# Tell HBase whether it should manage its own instance of ZooKeeper or not. We use an external ZooKeeper
export HBASE_MANAGES_ZK=false
# Access HDFS as the hdfs user, otherwise HBase lacks permission to write
export HADOOP_USER_NAME=hdfs
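Since HBASE_MANAGES_ZK=false, the external ZooKeeper ensemble must already be running before HBase starts. A quick check (assuming nc is installed and the ruok four-letter command is enabled on the ensemble):
for zk in master1.hadoop.yspay slave1.hadoop.yspay slave2.hadoop.yspay; do
  echo "$zk: $(echo ruok | nc $zk 2181)"   # each node should answer imok
done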
6. Copy the entire hbase-1.2.6 directory to the other cluster machines
scp -r /usr/local/hbase-1.2.6 root@ip:/usr/local/
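A small loop over the slave aliases from step 1 (illustrative only; it assumes the passwordless SSH set up in step 1 covers the root account):
for host in slave1.hadoop.yspay slave2.hadoop.yspay; do
  scp -r /usr/local/hbase-1.2.6 root@$host:/usr/local/
done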
7. Start the whole cluster from the master node
Start all nodes, master and RegionServers alike: /usr/local/hbase-1.2.6/bin/start-hbase.sh
Individual components can also be started on their own:
Start a single HMaster process:
bin/hbase-daemon.sh start master
Stop a single HMaster process:
bin/hbase-daemon.sh stop master
Start a single HRegionServer process:
bin/hbase-daemon.sh start regionserver
Stop a single HRegionServer process:
bin/hbase-daemon.sh stop regionserver
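After start-hbase.sh returns, the cluster can be checked through the master web UI, which HBase 1.x serves on port 16010 by default (open it in a browser, or just confirm it responds):
curl -sI http://master1.hadoop.yspay:16010/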
8. Check the running processes
jps
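Roughly the output to expect (PIDs are examples; QuorumPeerMain only appears on machines that also host the external ZooKeeper):
jps
# 12345 HMaster          <- on the master node
# 12346 HRegionServer    <- on every node listed in regionservers
# 12347 QuorumPeerMain   <- external ZooKeeper, if co-located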
9. Enter the HBase shell console
/usr/local/hbase-1.2.6/bin/hbase shell
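A quick smoke test inside the shell (the table and column-family names here are just examples):
status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'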
10. Start a backup master on another node: /usr/local/hbase-1.2.6/bin/hbase-daemon.sh start master
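Alternatively, backup masters can be listed in conf/backup-masters so that start-hbase.sh brings them up automatically (choosing slave1 here is just an example):
echo 'slave1.hadoop.yspay' > /usr/local/hbase-1.2.6/conf/backup-masters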