HBase can be installed in two ways: standalone and distributed. The standalone install is worth a quick look, but the focus here is on installing an HBase distributed cluster. Both are covered below.
HBase runs on top of Hadoop, so a working Hadoop environment is a prerequisite; installing Hadoop was covered in an earlier lesson. Download the hbase-0.98.11-hadoop2-bin.tar.gz package, which matches Hadoop 2.2.0 and Hadoop 2.6.0.
The HBase installation steps are as follows:
Step 1: Download and unpack HBase
Unpack hbase-0.98.11-hadoop2-bin.tar.gz into the target directory (/opt/modules here), rename the result to hbase, and then give ownership of it to the hadoop user (the account that runs Hadoop):
[hadoop@master modules]$ sudo rz    (when logged in as root, sudo is unnecessary; the same applies below)
[hadoop@master modules]$ sudo tar -zxvf hbase-0.98.11-hadoop2-bin.tar.gz
[hadoop@master modules]$ sudo mv hbase-0.98.11-hadoop2 hbase
[hadoop@master modules]$ ls
hadoop-2.6.0  hbase  hive1.0.0  jdk  jdk1.7.0_79  jdk1.8.0_60  scala-2.11.8  spark-2.2.0-bin-hadoop2.6  zookeeper-3.4.5-cdh5.10.0
[hadoop@master modules]$ sudo chown -R hadoop:hadoop hbase
[hadoop@master modules]$ ll
total 32
drwxr-xr-x 12 hadoop hadoop 4096 Apr 11 00:00 hadoop-2.6.0
drwxrwxr-x  8 hadoop hadoop 4096 May 29 00:22 hbase
drwxr-xr-x 11 hadoop hadoop 4096 May 24 12:34 hive1.0.0
lrwxrwxrwx  1 hadoop hadoop   12 Apr  9 05:59 jdk -> jdk1.8.0_60/
drwxr-xr-x  8 hadoop hadoop 4096 Apr 11  2015 jdk1.7.0_79
drwxr-xr-x  8 hadoop hadoop 4096 Aug  5  2015 jdk1.8.0_60
drwxrwxr-x  6 hadoop hadoop 4096 Mar  4  2016 scala-2.11.8
drwxr-xr-x 15 hadoop hadoop 4096 Apr  9 06:27 spark-2.2.0-bin-hadoop2.6
drwxr-xr-x 14 hadoop hadoop 4096 Apr  9 00:00 zookeeper-3.4.5-cdh5.10.0
Step 2: Configure the HBase environment variables
Open /etc/profile and add the HBase environment variables.
[hadoop@master modules]$ sudo vi /etc/profile
export HBASE_HOME=/opt/modules/hbase
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
After saving /etc/profile, source it so the changes take effect immediately.
[hadoop@master modules]$ source /etc/profile
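After sourcing the profile, it is worth confirming that the variables actually landed on the PATH. A minimal sketch (the /opt/modules/hbase path is this tutorial's layout; adjust it to yours):

```shell
# Confirm the profile changes took effect: HBASE_HOME set and its bin on PATH.
export HBASE_HOME=/opt/modules/hbase
export PATH="$HBASE_HOME/bin:$PATH"
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "HBASE_HOME on PATH" ;;
  *)                     echo "HBASE_HOME missing from PATH" ;;
esac
```

With the profile applied correctly, `which hbase` should then resolve to $HBASE_HOME/bin/hbase.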
Step 3: Modify the HBase configuration files
Edit the conf/hbase-env.sh configuration file:
1) Uncomment JAVA_HOME and point it at your own Java installation.
2) Uncomment HBASE_MANAGES_ZK and set it to true (HBase manages its own ZooKeeper, so no separate ZooKeeper install is needed).
[hadoop@master hbase]$ vi conf/hbase-env.sh
export JAVA_HOME=/opt/modules/jdk1.8.0_60
export HBASE_MANAGES_ZK=true
Then edit the conf/hbase-site.xml configuration file and add the following:
[hadoop@master hbase]$ vi conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>false</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.client.keyvalue.maxsize</name>
    <value>10485760</value>
  </property>
</configuration>
The hbase.rootdir value must agree with the fs.defaultFS (formerly fs.default.name) setting in Hadoop's core-site.xml.
- fs.defaultFS is set to hdfs://master:9000/
- hbase.rootdir is therefore set to hdfs://master:9000/hbase
- hbase.zookeeper.quorum is set to master
- hbase.tmp.dir is set to the tmp directory created earlier: /home/hadoop/data/hbase/tmp
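A mismatch between these two values is a common cause of startup failures, so it can be worth checking mechanically. The sketch below is a hedged illustration: the two inline XML files are stand-ins for your real core-site.xml and hbase-site.xml.

```shell
# Sanity-check that hbase.rootdir lives under the filesystem named by fs.defaultFS.
# The sample files below stand in for the real Hadoop and HBase configs.
tmp=$(mktemp -d)
cat > "$tmp/core-site.xml" <<'EOF'
<configuration><property><name>fs.defaultFS</name><value>hdfs://master:9000</value></property></configuration>
EOF
cat > "$tmp/hbase-site.xml" <<'EOF'
<configuration><property><name>hbase.rootdir</name><value>hdfs://master:9000/hbase</value></property></configuration>
EOF
fs=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$tmp/core-site.xml")
root=$(sed -n 's|.*<value>\(hdfs://[^<]*\)</value>.*|\1|p' "$tmp/hbase-site.xml")
case "$root" in
  "$fs"/*) echo "rootdir matches defaultFS" ;;
  *)       echo "MISMATCH: $root vs $fs" ;;
esac
rm -rf "$tmp"
```

Point the two `sed` lines at your actual config files to run the same check against a live install.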
Step 4: Start HBase
1. First start the Hadoop pseudo-distributed cluster
[hadoop@master hadoop]$ sbin/start-all.sh
[hadoop@master hadoop]$ jps
2995 Jps
2134 NameNode
2234 DataNode
2412 SecondaryNameNode
2573 ResourceManager
2671 NodeManager
2. Start HBase
[hadoop@master hbase]$ bin/start-hbase.sh
[hadoop@master hbase]$ jps
3426 HRegionServer
3474 Jps
2134 NameNode
2234 DataNode
3228 HQuorumPeer
2412 SecondaryNameNode
3293 HMaster
2573 ResourceManager
2671 NodeManager
At this point the standalone HBase installation is complete.
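To be confident the install really works, a quick smoke test in the HBase shell is a good idea. A sketch of such a session (the table name 'smoke' and the values are arbitrary; output is omitted here):

```
[hadoop@master hbase]$ bin/hbase shell
hbase(main):001:0> create 'smoke', 'cf'
hbase(main):002:0> put 'smoke', 'row1', 'cf:a', 'value1'
hbase(main):003:0> scan 'smoke'
hbase(main):004:0> disable 'smoke'
hbase(main):005:0> drop 'smoke'
```

If `scan` returns the row you just `put`, HBase is reading and writing through HDFS correctly.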
Before installing an HBase distributed cluster, you should already have successfully built a Hadoop cluster; with that foundation, installing HBase is fairly straightforward. Let's build the HBase cluster together.
Step 1: HBase cluster architecture
Before building HBase, we first plan which nodes get which core HBase roles. Here we build on the 3-node Hadoop HA cluster set up earlier: master and slave1 are configured as HBase Masters (one active, one backup), and slave2 as a RegionServer. By the same logic, with 5 or more nodes the additional nodes would all be configured as RegionServers.
Step 2: HBase cluster installation
1. Configure conf/regionservers
[hadoop@master conf]$ sudo vi regionservers
slave2
2. Configure the backup node for the HBase Master
[hadoop@master conf]$ sudo vi backup-masters
slave1
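Both files use the same format: one hostname per line, and the start script launches the matching daemon on each listed host over ssh. A small sketch of that mechanism (temp files stand in for the real conf directory, and the `echo` stands in for the actual ssh invocation):

```shell
# Demonstrate how start-hbase.sh consumes regionservers/backup-masters:
# one hostname per line, one daemon started per host.
conf=$(mktemp -d)
printf 'slave2\n' > "$conf/regionservers"
printf 'slave1\n' > "$conf/backup-masters"
while read -r host; do
  echo "regionserver -> $host"   # the real script runs hbase-daemon.sh on $host via ssh
done < "$conf/regionservers"
while read -r host; do
  echo "backup master -> $host"
done < "$conf/backup-masters"
rm -rf "$conf"
```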
3. Configure conf/hbase-site.xml
[hadoop@master conf]$ sudo vi hbase-site.xml    (comments are included for readability)
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value><!-- the ZooKeeper cluster hosts -->
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/data/zookeeper</value><!-- ZooKeeper data directory (must match the ZooKeeper cluster's own config) -->
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value><!-- ZooKeeper client port -->
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://cluster/hbase</value><!-- shared directory for the RegionServers -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value><!-- enable distributed mode -->
  </property>
  <property>
    <name>hbase.master</name>
    <value>master:60000</value><!-- location of the HBase Master -->
  </property>
</configuration>
Note: for the hbase.rootdir setting above to work, copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf (or lib) directory; otherwise the RegionServers cannot resolve the logical nameservice name cluster.
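That copy is just two `cp` commands. A hedged sketch: on this tutorial's layout the real source and target would be /opt/modules/hadoop-2.6.0/etc/hadoop and /opt/modules/hbase/conf, but the demo below uses temp directories as stand-ins so it can run anywhere.

```shell
# Copy the HDFS client configs into HBase's conf dir so RegionServers can
# resolve the "cluster" nameservice. Temp dirs stand in for the real paths.
hadoop_conf=$(mktemp -d)   # stand-in for /opt/modules/hadoop-2.6.0/etc/hadoop
hbase_conf=$(mktemp -d)    # stand-in for /opt/modules/hbase/conf
touch "$hadoop_conf/core-site.xml" "$hadoop_conf/hdfs-site.xml"
cp "$hadoop_conf/core-site.xml" "$hadoop_conf/hdfs-site.xml" "$hbase_conf/"
ls "$hbase_conf"
rm -rf "$hadoop_conf" "$hbase_conf"
```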
4. Configure hbase-env.sh
[hadoop@master conf]$ sudo vi hbase-env.sh
# JDK install path
export JAVA_HOME=/opt/modules/jdk1.8.0_60
# use the standalone ZooKeeper cluster
export HBASE_MANAGES_ZK=false
5. Configure the environment variables
[hadoop@master conf]$ sudo vi /etc/profile
HBASE_HOME=/opt/modules/hbase
PATH=$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
export HBASE_HOME
6. Sync the HBase install to the other nodes
[hadoop@master modules]$ scp -r hbase slave1:/opt/modules/
[hadoop@master modules]$ scp -r hbase slave2:/opt/modules/
7. Start the HBase cluster
The HBase cluster must be started in the following order:
1) Start ZooKeeper
[hadoop@master conf]$ cd /opt/modules/zookeeper-3.4.5-cdh5.10.0/
[hadoop@master zookeeper-3.4.5-cdh5.10.0]$ bin/zkServer.sh start
JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.5-cdh5.10.0/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master zookeeper-3.4.5-cdh5.10.0]$ jps
6113 Jps
6086 QuorumPeerMain
[hadoop@slave1 zookeeper-3.4.5-cdh5.10.0]$ bin/zkServer.sh
JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.5-cdh5.10.0/bin/../conf/zoo.cfg
Usage: bin/zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[hadoop@slave1 zookeeper-3.4.5-cdh5.10.0]$ bin/zkServer.sh start
JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.5-cdh5.10.0/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 zookeeper-3.4.5-cdh5.10.0]$ jps
4728 Jps
4702 QuorumPeerMain
[hadoop@slave2 modules]$ cd zookeeper-3.4.5-cdh5.10.0/
[hadoop@slave2 zookeeper-3.4.5-cdh5.10.0]$ bin/zkServer.sh start
JMX enabled by default
Using config: /opt/modules/zookeeper-3.4.5-cdh5.10.0/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 zookeeper-3.4.5-cdh5.10.0]$ jps
3370 Jps
3338 QuorumPeerMain
2) Start HDFS and YARN
[hadoop@master hadoop-2.6.0]$ sbin/start-dfs.sh
18/05/29 01:15:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master slave1]
slave1: starting namenode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-namenode-slave1.out
master: starting namenode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-namenode-master.out
master: starting datanode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-datanode-master.out
slave1: starting datanode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-datanode-slave2.out
Starting journal nodes [master slave1 slave2]
slave2: starting journalnode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-slave2.out
master: starting journalnode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-master.out
slave1: starting journalnode, logging to /opt/modules/hadoop-2.6.0/logs/hadoop-hadoop-journalnode-slave1.out
18/05/29 01:15:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@master hadoop-2.6.0]$ sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/modules/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /opt/modules/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /opt/modules/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-slave1.out
master: starting nodemanager, logging to /opt/modules/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-master.out
3) Start HBase
[hadoop@master hbase]$ bin/start-hbase.sh
starting master, logging to /opt/modules/hbase/logs/hbase-hadoop-master-master.out
slave2: starting regionserver, logging to /opt/modules/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.out
slave1: starting master, logging to /opt/modules/hbase/bin/../logs/hbase-hadoop-master-slave1.out
4) Check the process status on each node with jps
[hadoop@master hbase]$ jps
8577 Jps
8193 JournalNode
7905 NameNode
8455 HMaster
8010 DataNode
7756 ResourceManager
7709 QuorumPeerMain

[hadoop@slave1 hbase]$ jps
4850 NameNode
5016 JournalNode
4867 HMaster
5113 Jps
4762 ResourceManager
4925 DataNode
4702 QuorumPeerMain

[hadoop@slave2 hbase]$ jps
2341 HRegionServer
3510 JournalNode
3575 Jps
3338 QuorumPeerMain
3419 DataNode
8. View HBase through the web UI
http://master:60010/master-status
http://slave1:60010/master-status
If all of the steps above succeed, your HBase cluster is installed and working.
That covers the main content of this section. It reflects my own learning process, and I hope it offers some guidance. If it was useful, please leave a like; if not, I ask for your understanding, and do point out any mistakes. Follow me to get updates as soon as they are posted. Thanks!