Hive is a data warehouse tool built on Hadoop, used for data extraction, transformation, and loading (ETL); it provides a mechanism for storing, querying, and analyzing large-scale data kept in Hadoop. Hive maps structured data files to database tables and offers SQL query capability, translating SQL statements into MapReduce jobs for execution.
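For example, a delimited file on HDFS can be mapped to a table and queried with plain SQL, which Hive compiles into a MapReduce job behind the scenes. A minimal sketch (the table name, column layout, and HDFS path here are invented for illustration):

```sql
-- map a directory of comma-delimited files on HDFS to a table
create external table emp (id int, name string)
  row format delimited fields terminated by ','
  location '/data/emp';

-- this aggregate is executed as a MapReduce job
select count(*) from emp;
```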
| | hadoop151 | hadoop152 | hadoop153 |
|---|---|---|---|
| hive & mysql | √ | | |
Switch to the root user and uninstall the MariaDB packages that ship with the system.
```bash
[root@hadoop151 software]$ rpm -qa | grep mariadb
[root@hadoop151 software]$ yum -y remove mariadb-libs-5.5.60-1.el7_5.x86_64
```
Install MySQL.
```bash
[root@hadoop151 mysql-libs]$ rpm -ivh MySQL-server-5.6.24-1.el6.x86_64.rpm
[root@hadoop151 mysql-libs]$ cat /root/.mysql_secret    # view the initial password
[root@hadoop151 mysql-libs]$ service mysql start        # start the MySQL service
```
Log in to MySQL and set a password.
```bash
[root@hadoop151 mysql-libs]$ mysql -uroot -p<initial password>
mysql> set password=password('000000');   # set the password to six zeros
```
Configure MySQL for remote login.
```sql
mysql> use mysql;
mysql> show tables;
mysql> desc user;
mysql> select User, Host, Password from user;
/* modify the user table: change Host to % */
mysql> update user set host='%' where host='localhost';
/* delete the root user's remaining host entries */
mysql> delete from user where Host='hadoop151';
mysql> delete from user where Host='127.0.0.1';
mysql> delete from user where Host='::1';
mysql> flush privileges;
```
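After the flush, a quick sanity check: the root account should now be reachable from any host, i.e. it should appear in the user table with Host set to `%` only.

```sql
mysql> select User, Host from user;
-- root should now show up with Host = '%' and no leftover host entries
```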
Set the MySQL character encoding to UTF-8.
Edit the my.cnf file (it may be under /etc or under /usr) and add the following in the [mysqld] section:
```ini
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
```
Restart the MySQL service and check the encoding.
```bash
[root@hadoop151 usr]$ service mysql restart
[root@hadoop151 usr]$ mysql -uroot -p000000
mysql> show variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | utf8                       |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
```
Extract the Hive package and rename the directory.
```bash
[hadoop@hadoop151 software]$ tar -zxvf apache-hive-1.2.1-bin.tar.gz -C /opt/module/
[hadoop@hadoop151 module]$ mv apache-hive-1.2.1-bin/ hive
```
Copy the MySQL JDBC driver into the "hive/lib" directory.
```bash
[hadoop@hadoop151 mysql-connector-java-5.1.27]$ cp mysql-connector-java-5.1.27-bin.jar /opt/module/hive/lib/
```
In the "hive/conf" directory, rename "hive-env.sh.template" and edit its contents.
```bash
[hadoop@hadoop151 conf]$ mv hive-env.sh.template hive-env.sh
[hadoop@hadoop151 conf]$ vim hive-env.sh

# contents to set in hive-env.sh:
HADOOP_HOME=/opt/module/hadoop
export HIVE_CONF_DIR=/opt/module/hive/conf
```
In the "hive/conf" directory, rename the "hive-log4j.properties.template" file and set the log directory.
```bash
[hadoop@hadoop151 conf]$ mv hive-log4j.properties.template hive-log4j.properties
[hadoop@hadoop151 conf]$ vim hive-log4j.properties

# set in hive-log4j.properties:
hive.log.dir=/opt/module/hive/logs
```
Create a hive-site.xml file in the "hive/conf" directory to point the metastore at MySQL.
```bash
[hadoop@hadoop151 conf]$ vim hive-site.xml
```

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop151:3306/metastore?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>000000</value>
        <description>password to use against metastore database</description>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
</configuration>
```
In the "hive/bin" directory, initialize the Hive metastore schema in MySQL.
```bash
[hadoop@hadoop151 bin]$ ./schematool -initSchema -dbType mysql
```
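If initialization succeeded, a `metastore` database should now exist in MySQL and contain Hive's schema tables. A quick check, assuming the connection settings above:

```sql
mysql> use metastore;
mysql> show tables;
-- expect the Hive schema tables, e.g. DBS, TBLS, COLUMNS_V2, PARTITIONS
```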
Enter MySQL and modify the "metastore" database so that comments are stored as UTF-8 (otherwise Chinese table and column comments become garbled).
```sql
mysql> use metastore;
/* change column comments and table comments to utf8 */
alter table COLUMNS_V2 modify column COMMENT varchar(256) character set utf8;
alter table TABLE_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
/* change partition comments to utf8 */
alter table PARTITION_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
alter table PARTITION_KEYS modify column PKEY_COMMENT varchar(4000) character set utf8;
/* change index comments to utf8 */
alter table INDEX_PARAMS modify column PARAM_VALUE varchar(4000) character set utf8;
```
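After these changes, a Chinese comment should survive the round trip through the metastore. A quick hedged check from the Hive CLI (the table name here is made up):

```sql
-- create a table with Chinese comments, then read them back
create table comment_test (id int comment '编号') comment '中文注释测试';
desc comment_test;            -- the column comment should display correctly
desc formatted comment_test;  -- the table comment appears under Table Parameters
```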
Tez is an open-source Apache computation framework that supports DAG jobs. It derives directly from the MapReduce framework; its core idea is to split the Map and Reduce operations further: Map is split into Input, Processor, Sort, Merge, and Output, while Reduce is split into Input, Shuffle, Sort, Merge, Processor, and Output. These decomposed primitive operations can then be combined flexibly into new operations, and, assembled by a control program, they can form one large DAG job.
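Once Hive is switched onto Tez (configured below), this DAG structure is visible in query plans: EXPLAIN on a query that needs a shuffle shows Tez vertices connected by edges, instead of a chain of separate MapReduce jobs. A hedged sketch (the emp table is hypothetical):

```sql
explain select name, count(*) from emp group by name;
-- the Tez plan lists vertices and the edges between them, e.g.:
--   Edges: Reducer 2 <- Map 1 (SIMPLE_EDGE)
```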
Extract the Tez package and rename the directory.
```bash
[hadoop@hadoop151 software]$ tar -zxvf apache-tez-0.9.1-bin.tar.gz -C /opt/module/
[hadoop@hadoop151 module]$ mv apache-tez-0.9.1-bin/ tez-0.9.1
```
Add the Tez environment variables and dependency-jar variables to the "hive-env.sh" file.
```bash
export TEZ_HOME=/opt/module/tez-0.9.1
export TEZ_JARS=""
for jar in `ls $TEZ_HOME | grep jar`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/$jar
done
for jar in `ls $TEZ_HOME/lib`; do
    export TEZ_JARS=$TEZ_JARS:$TEZ_HOME/lib/$jar
done
export HIVE_AUX_JARS_PATH=/opt/module/hadoop/share/hadoop/common/hadoop-lzo-0.4.20.jar$TEZ_JARS
```
Add the following to "hive-site.xml" to change Hive's execution engine.
```xml
<property>
    <name>hive.execution.engine</name>
    <value>tez</value>
</property>
```
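The engine can also be switched per session from the Hive CLI, which is handy for comparing Tez against MapReduce on the same query (a sketch; the student table is hypothetical):

```sql
set hive.execution.engine=tez;
select count(*) from student;   -- runs as a Tez DAG
set hive.execution.engine=mr;
select count(*) from student;   -- same query on MapReduce, for comparison
```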
Create a tez-site.xml file under Hive's "/opt/module/hive/conf" directory.
```bash
[hadoop@hadoop151 conf]$ vim tez-site.xml
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>tez.lib.uris</name>
        <value>${fs.defaultFS}/tez/tez-0.9.1,${fs.defaultFS}/tez/tez-0.9.1/lib</value>
    </property>
    <property>
        <name>tez.lib.uris.classpath</name>
        <value>${fs.defaultFS}/tez/tez-0.9.1,${fs.defaultFS}/tez/tez-0.9.1/lib</value>
    </property>
    <property>
        <name>tez.use.cluster.hadoop-libs</name>
        <value>true</value>
    </property>
    <property>
        <name>tez.history.logging.service.class</name>
        <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
    </property>
</configuration>
```
Upload "/opt/module/tez-0.9.1" to the /tez path on HDFS.
```bash
[hadoop@hadoop151 conf]$ hadoop fs -mkdir /tez
[hadoop@hadoop151 conf]$ hadoop fs -put /opt/module/tez-0.9.1/ /tez
```
Modify the yarn-site.xml file under "hadoop/etc/hadoop" to prevent the NodeManager from killing Tez tasks for exceeding virtual-memory limits. After modifying yarn-site.xml, distribute it to every node in the cluster.
```xml
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
```
Start Hive and verify the setup by creating a database and a table and inserting data (remember to start the Hadoop cluster first); a sample session follows the command below.
```bash
[hadoop@hadoop151 bin]$ ./hive
```
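A minimal smoke test might look like the following (the database, table, and sample row are made up for illustration). If the engine switch took effect, the insert should launch a Tez job rather than a MapReduce job:

```sql
hive (default)> create database if not exists test_db;
hive (default)> use test_db;
hive (test_db)> create table student (id int, name string);
hive (test_db)> insert into table student values (1, 'zhangsan');  -- should run on Tez
hive (test_db)> select * from student;
```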