Installing Hive Step by Step

The Apache Hive ™ data warehouse software facilitates reading, writing, and managing large datasets residing in distributed storage using SQL. Structure can be projected onto data already in storage. A command line tool and JDBC driver are provided to connect users to Hive.

-- Apache Hive lets you read, write, and manage large datasets in distributed storage using SQL.

Hive is a data-management framework that sits on top of a Hadoop cluster, so before setting up Hive we first need Hadoop and MySQL.

MySQL stores Hive's metadata (the metastore); the data blocks themselves live in Hadoop. Hive essentially accepts the user's SQL, maps it to MapReduce jobs, and lets YARN schedule and run them. In terms of raw efficiency this design is not great, but the benefit is that Hadoop development no longer requires writing Java code: Hive provides a SQL-like language, so you work with Hadoop much as you would with a relational database.
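For example, once the stack below is running, a single HiveQL statement stands in for a whole hand-written MapReduce program (the table name here is hypothetical):

[hadoop@hd1 ~]$ hive -e "SELECT word, count(*) FROM docs GROUP BY word;"
# Hive compiles the GROUP BY into a MapReduce job and submits it to YARN;
# the user writes no Java code at all.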

To keep things simple, the Hadoop cluster here is built on virtual machines.

There is nothing special about the Linux installation itself, so it is skipped.

Node layout:

      NN    SNN   DN    MR    YARN
hd1   Y                 Y     Y
hd2         Y     Y
hd3               Y
hd4               Y

A few concepts first:

NameNode (NN):

Main functions:

  • Accepts client read/write requests;
  • Holds the metadata: file names, owners and permissions, and each file's block list. (Which DataNode holds a given block is reported to the NameNode by each DataNode at startup.) All of the NameNode's metadata is loaded into memory when the cluster starts.

The metadata is also persisted to a disk file named fsimage. Block locations are not written to fsimage; for speed they are kept only in memory. When a change is made, HDFS does not rewrite fsimage immediately; it first appends the operation to the edits log and merges it into fsimage once certain conditions are met. Think of it this way: fsimage is a periodic full backup of the metadata and edits is a real-time incremental log; the full backup plus the increments together form a complete copy of the metadata.
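Both files are visible on disk under the NameNode's metadata directory (the path below assumes the dfs.namenode.name.dir configured later in this post; exact file names vary with transaction ids):

[hadoop@hd1 ~]$ ls /usr/hadoop/hadoop-2.7.1/tmp/dfs/name/current
# expect files like fsimage_0000000000000000000, edits_inprogress_..., seen_txid, VERSION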

SecondaryNameNode (SNN):

It is not a standby for the NN; it only holds a partial copy of the NN's data (though that can still serve as a backup of sorts). Its main job is to help the NN merge the edits log into fsimage, reducing NN startup time. The merge involves heavy I/O, so the SNN takes this work off the NN: after merging, a new fsimage is produced on the SNN and pushed back to the NN. This cycle repeats continuously.
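The checkpoint cadence is controlled by two HDFS properties; once the cluster is up you can read the effective values back with hdfs getconf (the values in the comments are the stock defaults, not something set in this post):

[hadoop@hd1 ~]$ hdfs getconf -confKey dfs.namenode.checkpoint.period   # 3600 seconds between checkpoints
[hadoop@hd1 ~]$ hdfs getconf -confKey dfs.namenode.checkpoint.txns     # 1000000 txns force an early checkpoint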

yarn (Yet Another Resource Negotiator): the resource-management workhorse

With YARN, multiple compute frameworks can run on one cluster, each application getting its own ApplicationMaster.

Basic functions:

  • YARN handles resource management and scheduling;
  • The MRAppMaster handles task splitting, scheduling, monitoring, and fault tolerance;
  • MapTask/ReduceTask execute the actual work;

每个mapreduce做业对应一个MRAppMaster:

  • MRAppMaster负责做业调度,yarn将资源分配给MRAppMaster,MRAppMaster将资源分配给内部任务。

MRAppMaster fault tolerance (both behaviors can be observed with the commands after this list):

  • If the MRAppMaster itself fails, YARN restarts it;
  • If a task fails, the MRAppMaster requests new resources and reruns it.
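Once a job is running, its ApplicationMaster can be watched from the CLI (application ids are assigned by YARN; output omitted):

[hadoop@hd1 ~]$ yarn application -list                     # running apps and their ApplicationMasters
[hadoop@hd1 ~]$ yarn application -status <application_id>  # per-app details, including AM attempts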

DataNode (DN):

A DN reports its block list to the NN at startup, then stays in sync by sending a heartbeat every 3 seconds. If the NN receives no heartbeat from a DN for 10 minutes, it considers the node lost and re-replicates that node's blocks to other DNs.
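The heartbeat status is visible in the admin report (output trimmed; the "Last contact" field is driven by the 3-second heartbeat described above):

[hadoop@hd1 ~]$ hdfs dfsadmin -report
# lists each live DataNode with its capacity, usage, and Last contact time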

For convenience the VMs were created by cloning; the one thing to watch out for is the NIC configuration — see https://my.oschina.net/u/3862440/blog/2250996.

Hadoop cluster installation steps

See http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/ClusterSetup.html

 

Change the hostname:

[root@hd1 ~]# vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=hd2.localdomain

User and group layout:

groupadd -g 10010 hadoop 

useradd -u 10010 -g hadoop -d /home/hadoop hadoop

groupadd -g 10012 mysql 

useradd -u 10012 -g mysql -d /home/mysql mysql

Install the JDK:

tar -xvf jdk-8u11-linux-x64.tar -C /usr/

Configure the environment variables:

JAVA_HOME=/usr/java/jdk1.8.0_11
export JAVA_HOME
export JRE_HOME=/usr/java/jdk1.8.0_11
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Configure the hosts file for local name resolution:

[root@hd1 ~]# vi /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.83.11  hd1
192.168.83.12  hd2
192.168.83.13  hd3
192.168.83.14  hd4

Set up SSH mutual trust:

ssh-keygen -t rsa 
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys

hd1 needs the public keys of hd2, hd3, and hd4, and those nodes likewise need hd1's key; then hd1 can connect to the other nodes without a password, and they can connect back to hd1 the same way.
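A compact way to distribute the keys (a sketch: run as the hadoop user on every node, after each node has generated its own key pair):

for host in hd1 hd2 hd3 hd4; do
    ssh-copy-id hadoop@$host    # appends this node's public key to $host's authorized_keys
done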

Software used:

hadoop-2.6.0-cdh5.7.0

hive-1.1.0-cdh5.7.0.tar

mysql-5.7.20-linux-glibc2.12-x86_64.tar

Hadoop configuration:

  • Configure JAVA_HOME

    vi etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_11
  • vi etc/hadoop/core-site.xml — Hadoop's core configuration file; this sets the address and port of the HDFS master (the NameNode).
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hd1:9000</value>
         </property>
         <property>
                 <name>hadoop.tmp.dir</name>
                 <value>file:/usr/hadoop/hadoop-2.7.1/tmp</value>
                 <description>Abase for other temporary directories.</description>
         </property>
</configuration>
  • etc/hadoop/hdfs-site.xml — the HDFS configuration; this sets the replication factor (3, matching the default) and the SNN address and port.
<configuration>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>hd2:50090</value>
        </property>
        <property>
                 <name>dfs.replication</name>
                 <value>3</value>
         </property>
         <property>
                 <name>dfs.namenode.name.dir</name>
                 <value>file:/usr/hadoop/hadoop-2.7.1/tmp/dfs/name</value>
         </property>
         <property>
                 <name>dfs.datanode.data.dir</name>
                 <value>file:/usr/hadoop/hadoop-2.7.1/tmp/dfs/data</value>
          </property>
</configuration>
  • MR configuration: etc/hadoop/mapred-site.xml — this sets MapReduce to run on YARN and configures the JobHistory server's RPC and web addresses.
<configuration>
        <property>
               <name>mapreduce.framework.name</name>
               <value>yarn</value>
        </property>
        <property>
               <name>mapreduce.jobhistory.address</name>
               <value>hd1:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>hd1:19888</value>
        </property>
</configuration>
  • YARN configuration: etc/hadoop/yarn-site.xml:
<configuration>
        <property>
             <name>yarn.resourcemanager.hostname</name>
             <value>hd1</value>
        </property>
        <property>
             <name>yarn.nodemanager.aux-services</name>
             <value>mapreduce_shuffle</value>
       </property>
</configuration>
  • DN configuration (the slaves file):
[hadoop@hd1 hadoop]$ vi slaves 

hd2
hd3
hd4

Copy the hadoop-2.7.1 directory on hd1 to the hd2, hd3, and hd4 nodes:

scp -r /home/hadoop/hadoop-2.7.1 hd2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.1 hd3:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.1 hd4:/home/hadoop/

Configure the Hadoop environment variables, to simplify later commands:

export HADOOP_INSTALL=/home/hadoop/hadoop-2.7.1
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin

Copy it to hd2, hd3, and hd4:

scp ~/.bash_profile  hd2:~/
scp ~/.bash_profile  hd3:~/
scp ~/.bash_profile  hd4:~/

Make the hadoop user's profile take effect: source ~/.bash_profile
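A quick sanity check that the profile took effect everywhere (the ssh loop works thanks to the mutual trust set up earlier):

for host in hd2 hd3 hd4; do
    ssh $host 'source ~/.bash_profile; hadoop version | head -1'
done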

Startup:

HDFS is a distributed file system with its own metadata structures, so it must be formatted before first use. This has nothing to do with formatting the Linux ext3/ext4 partitions underneath, on top of which HDFS simply stores its block files. Format it:

hadoop namenode -format

java.net.UnknownHostException: hd1.localdomain: hd1.localdomain: unknown error
        at java.net.InetAddress.getLocalHost(InetAddress.java:1484)
        at org.apache.hadoop.net.DNS.resolveLocalHostname(DNS.java:264)
        at org.apache.hadoop.net.DNS.<clinit>(DNS.java:57)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: hd1.localdomain: unknown error
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:907)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1302)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1479)
        ... 8 more
18/10/23 22:49:36 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: hd1.localdomain: hd1.localdomain: unknown error
        at java.net.InetAddress.getLocalHost(InetAddress.java:1484)
        at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:287)
        at org.apache.hadoop.net.DNS.<clinit>(DNS.java:58)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:966)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:575)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:157)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
Caused by: java.net.UnknownHostException: hd1.localdomain: unknown error
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:907)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1302)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1479)
        ... 8 more
18/10/23 22:49:36 INFO namenode.FSImage: Allocated new BlockPoolId: BP-520690254-127.0.0.1-1540306176095
18/10/23 22:49:36 INFO common.Storage: Storage directory /usr/hadoop/hadoop-2.7.1/tmp/dfs/name has been successfully formatted.
18/10/23 22:49:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/10/23 22:49:36 INFO util.ExitUtil: Exiting with status 0
18/10/23 22:49:36 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: hd1.localdomain: hd1.localdomain: unknown error
************************************************************/

The error shows that hd1.localdomain cannot be resolved, so change the hostname to plain hd1, without a domain suffix.

All nodes need the same change.

vi /etc/sysconfig/network 

NETWORKING=yes
HOSTNAME=hd1

Reformat:

[hadoop@hd1 ~]$ hadoop namenode -format 
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
18/10/23 22:54:03 INFO util.GSet: capacity      = 2^15 = 32768 entries
Re-format filesystem in Storage Directory /usr/hadoop/hadoop-2.7.1/tmp/dfs/name ? (Y or N) Y
18/10/23 22:54:06 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1243395970-192.168.83.11-1540306446216
18/10/23 22:54:06 INFO common.Storage: Storage directory /usr/hadoop/hadoop-2.7.1/tmp/dfs/name has been successfully formatted.
18/10/23 22:54:06 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/10/23 22:54:06 INFO util.ExitUtil: Exiting with status 0
18/10/23 22:54:06 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hd1/192.168.83.11
************************************************************/

Success.

Start HDFS:

[hadoop@hd1 hadoop]$ start-dfs.sh 
18/10/23 23:01:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hd1]
hd1: starting namenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-namenode-hd1.out
hd4: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd4.out
hd3: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd3.out
hd2: starting datanode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-datanode-hd2.out
Starting secondary namenodes [hd2]
hd2: starting secondarynamenode, logging to /home/hadoop/hadoop-2.7.1/logs/hadoop-hadoop-secondarynamenode-hd2.out
18/10/23 23:01:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

As the output shows, the NN started on the first node, the SNN on the second, and DNs on hd2, hd3, and hd4 — exactly as planned.

Start YARN:

[hadoop@hd1 hadoop]$ start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-resourcemanager-hd1.out
hd2: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hd2.out
hd4: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hd4.out
hd3: starting nodemanager, logging to /home/hadoop/hadoop-2.7.1/logs/yarn-hadoop-nodemanager-hd3.out

As shown, the ResourceManager started on hd1 and NodeManagers started on hd2, hd3, and hd4.

YARN is the resource-management and job-scheduling framework: any job submitted to the cluster (a MapReduce job, for example) must request resources from YARN before it can run. It therefore runs a ResourceManager process (here on the NN host, hd1) to accept application requests, and, because it must track the resource state of every worker, a NodeManager process on each DN node.
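The NodeManager inventory can be confirmed from the CLI (output is illustrative):

[hadoop@hd1 ~]$ yarn node -list
# expect three RUNNING nodes: hd2, hd3, and hd4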

[hadoop@hd1 hadoop]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/hadoop-2.7.1/logs/mapred-hadoop-historyserver-hd1.out

MapReduce is a compute framework deployed on top of YARN, so it needs no long-running daemons of its own once YARN is up; the JobHistory server started here only records completed MR jobs.
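The history server listens on the addresses configured in mapred-site.xml above; a quick reachability check (output not shown):

[hadoop@hd1 ~]$ curl -s http://hd1:19888/ | head -5    # the JobHistory web UI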

Use jps to check the processes on each node:

[hadoop@hd1 hadoop]$ jps
3681 NameNode
4259 JobHistoryServer
3957 ResourceManager
4362 Jps

[hadoop@hd2 ~]$ jps
2643 DataNode
2973 Jps
2735 SecondaryNameNode
2815 NodeManager

[hadoop@hd3 ~]$ jps
2216 DataNode
2472 Jps
2317 NodeManager

[hadoop@hd4 ~]$ jps
2368 NodeManager
2504 Jps
2265 DataNode

HDFS test:

[hadoop@hd1 ~]$ hdfs dfs -mkdir /hadoop/
18/10/23 23:26:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@hd1 ~]$ hdfs dfs -ls /
18/10/23 23:26:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2018-10-23 23:26 /hadoop
drwxrwx---   - hadoop supergroup          0 2018-10-23 23:09 /tmp
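A quick write/read round trip confirms data actually flows through the DNs (test.txt is just a throwaway local file):

[hadoop@hd1 ~]$ echo "hello hdfs" > test.txt
[hadoop@hd1 ~]$ hdfs dfs -put test.txt /hadoop/
[hadoop@hd1 ~]$ hdfs dfs -cat /hadoop/test.txt
hello hdfs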

Next, deploy Hive:

First deploy MySQL (it will hold Hive's metastore):

  • Unpack
  • Initialize (roughly the equivalent of creating a database with Oracle's DBCA):
[mysql@hd1 bin]$ ./mysqld --initialize-insecure --basedir=/home/mysql/mysql-5.7.20 --datadir=/home/mysql/mysql-5.7.20/data --user=mysql
2018-10-23T15:33:43.531824Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
2018-10-23T15:33:43.532051Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
2018-10-23T15:33:43.532611Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-10-23T15:33:46.626624Z 0 [Warning] InnoDB: New log files created, LSN=45790
2018-10-23T15:33:47.139730Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2018-10-23T15:33:47.498281Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: 08944547-d6d9-11e8-b3f4-000c297eaaf3.
2018-10-23T15:33:47.550419Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2018-10-23T15:33:47.564974Z 1 [Warning] root@localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.

Write the my.cnf configuration file:

[mysqld]
basedir=/home/mysql/mysql-5.7.20
datadir=/home/mysql/mysql-5.7.20/data
socket=/tmp/mysql.sock
log_error=/home/mysql/mysql-5.7.20/mysql.err
user=mysql

[mysql]
socket=/tmp/mysql.sock

Start MySQL:

[mysql@hd1 support-files]$ ./mysql.server start 
Starting MySQL..[  OK  ]

For convenience, copy the MySQL startup script into /etc/init.d/ so the service can be managed like any other system service:

[root@hd1 support-files]# cp mysql.server /etc/init.d/mysqld
[root@hd1 support-files]# /etc/init.d/mysqld restart
Shutting down MySQL..[  OK  ]
Starting MySQL.[  OK  ]
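To also have it start at boot (assuming a sysvinit/CentOS 6-style system, consistent with the /etc/sysconfig/network usage above):

[root@hd1 ~]# chkconfig --add mysqld
[root@hd1 ~]# chkconfig mysqld on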

Test:

[mysql@hd1 ~]$ mysql 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

Set the MySQL root password:

[mysql@hd1 bin]$ mysqladmin -u root password 'Oracle123'  -S '/tmp/mysql.sock';
mysqladmin: [Warning] Using a password on the command line interface can be insecure.
Warning: Since password will be sent to server in plain text, use ssl connection to ensure password safety.

Log in with the password:

[mysql@hd1 bin]$ mysql -uroot -p -S '/tmp/mysql.sock' 
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show database;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'database' at line 1
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

MySQL is now fully installed.

Deploy Hive:

Unpack

Configure it: [hadoop@hd1 conf]$ more hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>        
 <property>                
   <name>hive.metastore.local</name>                
   <value>true</value>        
 </property>        

 <property>                
  <name>javax.jdo.option.ConnectionURL</name>                
  <value>jdbc:mysql://192.168.83.11:3306/hive?characterEncoding=UTF-8</value>        
 </property> 

 <property>                
   <name>javax.jdo.option.ConnectionDriverName</name>                
   <value>com.mysql.jdbc.Driver</value>        
 </property>      

 <property>                
   <name>javax.jdo.option.ConnectionUserName</name>                
   <value>root</value>        
 </property>      

 <property>                
    <name>javax.jdo.option.ConnectionPassword</name>                
    <value>Oracle123</value><!-- the database password -->
 </property>
</configuration>

This sets the account and password Hive uses to access MySQL; the MySQL JDBC driver (mysql-connector-java.jar) must also be placed under Hive's lib directory.
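For example (the exact jar file name depends on the connector version you downloaded):

[hadoop@hd1 ~]$ cp mysql-connector-java-5.1.46.jar /home/hadoop/hive-1.1.0-cdh5.7.0/lib/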

A database for Hive's metadata must be created in MySQL beforehand:

mysql> create database hive;
Query OK, 1 row affected (0.11 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

Start Hive:

[hadoop@hd1 conf]$ hive 
which: no hbase in (/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/java/jdk1.8.0_11/bin:/usr/java/jdk1.8.0_11/bin:/home/hadoop/bin:/home/hadoop/hadoop-2.6.0-cdh5.7.0/bin:/home/hadoop/hadoop-2.6.0-cdh5.7.0/sbin:/home/hadoop/hive-1.1.0-cdh5.7.0/bin)
18/10/24 03:14:34 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/home/hadoop/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
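The hive.metastore.local warning is harmless: that property was removed from later Hive releases, which infer a local metastore from the absence of a metastore URI. A quick smoke test, after which Hive will have created its metastore tables in the MySQL hive database (output is illustrative):

hive> show databases;
OK
default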