Hive is an open-source data warehouse tool built on Hadoop, used to store and process massive volumes of structured data. It stores that data in the Hadoop file system rather than in a database, but provides a database-like storage and processing layer on top, using HQL (a SQL-like language) to manage and process the data. You can think of the structured data in Hive as a collection of tables, while in reality the data is stored, distributed, in HDFS. Hive parses and transforms HQL statements into a series of Hadoop map/reduce jobs, and completes data processing by executing those jobs.
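To make the "tables over HDFS files" idea concrete, here is a minimal HiveQL sketch (the table name and HDFS path are hypothetical): an external table simply overlays a schema on a directory of delimited files already sitting in HDFS, and the query below is compiled into map/reduce jobs.

hive> CREATE EXTERNAL TABLE access_log (ip STRING, url STRING)
    >   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    >   LOCATION '/data/access_log';   -- existing files in HDFS, no data is copied
hive> SELECT url, count(*) FROM access_log GROUP BY url;   -- executes as map/reduce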
Hive grew out of Facebook's log-analysis needs. Faced with massive structured data, Hive accomplished at low cost what previously required large-scale databases, and it has a relatively low learning curve while keeping application development flexible and efficient.
Barely a year has passed since Hive released its first official stable version, 0.3.0, on 2009-04-29. It is still maturing, and material about it online is quite scarce, Chinese material even more so. This article explores applying Hive to real business workloads and summarizes the lessons learned, in the hope that readers can avoid some of the same detours.
Environment:

JDK: 1.8
Hadoop Release: 2.7.4
CentOS: 7.3

node1 (master): 192.168.252.121
node2 (slave1): 192.168.252.122
node3 (slave2): 192.168.252.123
node4 (mysql):  192.168.252.124
Install Apache Hive
The prerequisite is an installed Hadoop cluster. Hive only needs to be installed on a NameNode host of the cluster (any host running a NameNode will do); it does not need to be installed on the DataNode machines. Note also that although editing the configuration files does not require Hadoop to be running, this article uses Hadoop's hdfs commands, and both those commands and starting Hive itself require Hadoop to be up, so it is best to start the Hadoop cluster first.
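Before continuing, a quick sanity check that Hadoop is actually running (host names and paths follow the layout above):

jps                          # on node1 this should list NameNode and ResourceManager
bin/hdfs dfsadmin -report    # run from the Hadoop directory; shows live DataNodes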
Install MySQL
MySQL is used to store Hive's metadata. (Hive's bundled embedded database, Derby, would also work, but Derby is generally not used in Hive production environments.) A single-node MySQL instance is enough here; if you want high availability, you can also deploy MySQL in master-slave mode.
Hadoop

Hadoop-2.7.4 Cluster Quick Setup

MySQL (pick either one)

Installing MySQL 5.7.19 from binaries on CentOS 7.3

Setting up MySQL 5.7.19 master-slave replication, with an analysis of how replication works
su hadoop
cd /home/hadoop/
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.0/apache-hive-2.3.0-bin.tar.gz
tar -zxvf apache-hive-2.3.0-bin.tar.gz
mv apache-hive-2.3.0-bin hive-2.3.0
To make the environment variables take effect for all users, edit the /etc/profile file; to make them take effect only for the current user, edit the ~/.bashrc file.
sudo vi /etc/profile
#hive
export HIVE_HOME=/home/hadoop/hive-2.3.0/
export PATH=${HIVE_HOME}/bin:$PATH
Run source /etc/profile to make the environment variables take effect.
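A quick check that the variables took effect:

echo $HIVE_HOME    # should print /home/hadoop/hive-2.3.0/
which hive         # should resolve inside ${HIVE_HOME}/bin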
cd /home/hadoop/hive-2.3.0/conf
cp hive-default.xml.template hive-site.xml
Use hadoop to create the HDFS directories, because hive-site.xml contains the following default configuration:
<property> <name>hive.metastore.warehouse.dir</name> <value>/user/hive/warehouse</value> <description>location of default database for the warehouse</description> </property> <property>
Go into the Hadoop installation directory, use hadoop commands to create the /user/hive/warehouse directory (plus tmp and log directories), and grant permissions on them so they can be used to store files:
cd /home/hadoop/hadoop-2.7.4
bin/hadoop fs -mkdir -p /user/hive/warehouse
bin/hadoop fs -mkdir -p /user/hive/tmp
bin/hadoop fs -mkdir -p /user/hive/log
bin/hadoop fs -chmod -R 777 /user/hive/warehouse
bin/hadoop fs -chmod -R 777 /user/hive/tmp
bin/hadoop fs -chmod -R 777 /user/hive/log
Check that the directories were created successfully with the following command:
bin/hadoop fs -ls /user/hive
Search for hive.exec.scratchdir and change its value to /user/hive/tmp:
<property> <name>hive.exec.scratchdir</name> <value>/user/hive/tmp</value> </property>
Search for hive.querylog.location and change its value to /user/hive/log/hadoop:
<property> <name>hive.querylog.location</name> <value>/user/hive/log/hadoop</value> <description>Location of Hive run time structured log file</description> </property>
Search for javax.jdo.option.ConnectionURL and change its value to the MySQL address:
<property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://192.168.252.124:3306/hive?createDatabaseIfNotExist=true</value> <description> JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database. </description> </property>
Search for javax.jdo.option.ConnectionDriverName and change its value to the MySQL driver class name:
<property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> <description>Driver class name for a JDBC metastore</description> </property>
Search for javax.jdo.option.ConnectionUserName and change its value to the MySQL login user:
<property> <name>javax.jdo.option.ConnectionUserName</name> <value>root</value> <description>Username to use against metastore database</description> </property>
Search for javax.jdo.option.ConnectionPassword and change its value to the MySQL login password:
<property> <name>javax.jdo.option.ConnectionPassword</name> <value>mima</value> <description>password to use against metastore database</description> </property>
Create a local tmp directory:

mkdir /home/hadoop/hive-2.3.0/tmp

Then, in hive-site.xml, replace every occurrence of ${system:java.io.tmpdir} with /home/hadoop/hive-2.3.0/tmp, and every occurrence of ${system:user.name} with ${user.name}.
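A sketch of doing both replacements with sed instead of editing by hand (GNU sed assumed; # is used as the delimiter so the slashes in the path need no escaping):

cd /home/hadoop/hive-2.3.0/conf
sed -i 's#${system:java.io.tmpdir}#/home/hadoop/hive-2.3.0/tmp#g' hive-site.xml
sed -i 's#${system:user.name}#${user.name}#g' hive-site.xml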
cp hive-env.sh.template hive-env.sh
vi hive-env.sh

HADOOP_HOME=/home/hadoop/hadoop-2.7.4/
export HIVE_CONF_DIR=/home/hadoop/hive-2.3.0/conf
export HIVE_AUX_JARS_PATH=/home/hadoop/hive-2.3.0/lib
Download the MySQL JDBC driver into Hive's lib directory:

cd /home/hadoop/hive-2.3.0/lib
wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.38/mysql-connector-java-5.1.38.jar
First, make sure the hive database has already been created in MySQL.
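If it does not exist yet, a minimal way to create it (run against node4; latin1 is the character set traditionally recommended for the Hive metastore, to avoid utf8 index-length problems):

mysql -h 192.168.252.124 -uroot -p -e "CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET latin1;"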
cd /home/hadoop/hive-2.3.0/bin
./schematool -initSchema -dbType mysql
If you see output like the following, initialization succeeded:
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
Log in to MySQL to check the metastore schema:

/usr/local/mysql/bin/mysql -uroot -p
mysql> show databases; +--------------------+ | Database | +--------------------+ | information_schema | | hive | | mysql | | performance_schema | | sys | +--------------------+ 5 rows in set (0.00 sec)
mysql> use hive; Reading table information for completion of table and column names You can turn off this feature to get a quicker startup with -A Database changed mysql> show tables; +---------------------------+ | Tables_in_hive | +---------------------------+ | AUX_TABLE | | BUCKETING_COLS | | CDS | | COLUMNS_V2 | | COMPACTION_QUEUE | | COMPLETED_COMPACTIONS | | COMPLETED_TXN_COMPONENTS | | DATABASE_PARAMS | | DBS | | DB_PRIVS | | DELEGATION_TOKENS | | FUNCS | | FUNC_RU | | GLOBAL_PRIVS | | HIVE_LOCKS | | IDXS | | INDEX_PARAMS | | KEY_CONSTRAINTS | | MASTER_KEYS | | NEXT_COMPACTION_QUEUE_ID | | NEXT_LOCK_ID | | NEXT_TXN_ID | | NOTIFICATION_LOG | | NOTIFICATION_SEQUENCE | | NUCLEUS_TABLES | | PARTITIONS | | PARTITION_EVENTS | | PARTITION_KEYS | | PARTITION_KEY_VALS | | PARTITION_PARAMS | | PART_COL_PRIVS | | PART_COL_STATS | | PART_PRIVS | | ROLES | | ROLE_MAP | | SDS | | SD_PARAMS | | SEQUENCE_TABLE | | SERDES | | SERDE_PARAMS | | SKEWED_COL_NAMES | | SKEWED_COL_VALUE_LOC_MAP | | SKEWED_STRING_LIST | | SKEWED_STRING_LIST_VALUES | | SKEWED_VALUES | | SORT_COLS | | TABLE_PARAMS | | TAB_COL_STATS | | TBLS | | TBL_COL_PRIVS | | TBL_PRIVS | | TXNS | | TXN_COMPONENTS | | TYPES | | TYPE_FIELDS | | VERSION | | WRITE_SET | +---------------------------+ 57 rows in set (0.00 sec)
Start Hive
cd /home/hadoop/hive-2.3.0/bin
./hive
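Besides the interactive shell, the CLI can run statements non-interactively, which is convenient for scripting; -e takes a quoted statement and -f takes a script file (the script path here is hypothetical):

./hive -e "show databases;"
./hive -f /home/hadoop/scripts/init.hql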
Create a database in Hive:
hive> create database ymq; OK Time taken: 0.742 seconds
Switch to the database:
hive> use ymq; OK Time taken: 0.036 seconds
Create a table:
hive> create table test (mykey string,myval string); OK Time taken: 0.569 seconds
Insert data:
hive> insert into test values("1","www.ymq.io"); WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. Query ID = hadoop_20170922011126_abadfa44-8ebe-4ffc-9615-4241707b3c03 Total jobs = 3 Launching Job 1 out of 3 Number of reduce tasks is set to 0 since there's no reduce operator Starting Job = job_1506006892375_0001, Tracking URL = http://node1:8088/proxy/application_1506006892375_0001/ Kill Command = /home/hadoop/hadoop-2.7.4//bin/hadoop job -kill job_1506006892375_0001 Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0 2017-09-22 01:12:12,763 Stage-1 map = 0%, reduce = 0% 2017-09-22 01:12:20,751 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.24 sec MapReduce Total cumulative CPU time: 1 seconds 240 msec Ended Job = job_1506006892375_0001 Stage-4 is selected by condition resolver. Stage-3 is filtered out by condition resolver. Stage-5 is filtered out by condition resolver. Moving data to directory hdfs://node1:9000/user/hive/warehouse/ymq.db/test/.hive-staging_hive_2017-09-22_01-11-26_242_8022847052615616955-1/-ext-10000 Loading data to table ymq.test MapReduce Jobs Launched: Stage-Stage-1: Map: 1 Cumulative CPU: 1.24 sec HDFS Read: 4056 HDFS Write: 77 SUCCESS Total MapReduce CPU Time Spent: 1 seconds 240 msec OK Time taken: 56.642 seconds
Query the data:
hive> select * from test; OK 1 www.ymq.io Time taken: 0.253 seconds, Fetched: 1 row(s)
In the HDFS web UI, view the data that was just written.
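The same data can also be inspected straight from the command line; the 000000_0 file name is what a single-mapper job typically writes, so treat it as illustrative:

bin/hadoop fs -ls /user/hive/warehouse/ymq.db/test
bin/hadoop fs -cat /user/hive/warehouse/ymq.db/test/000000_0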