HiveServer2 connection fails with the following error:

Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000: java.net.ConnectException: Connection refused (Connection refused) (state=08S01,code=0)
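"Connection refused" means nothing is accepting connections on hadoop01:10000, so the checks below walk through the services HiveServer2 depends on. As a quick first check (host and port taken from the error above), you can confirm whether anything is listening on the HiveServer2 port:

[root@hadoop01 ~]# netstat -nltp | grep 10000

If nothing is listed, work through the steps below.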
1. Check whether the HiveServer2 service is running
[root@hadoop01 ~]# jps
5101 RunJar    # running normally
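If RunJar does not show up, HiveServer2 is not running. A minimal sketch for starting it in the background, assuming the install path used later in this post (/root/servers/hive-apache-2.3.6):

[root@hadoop01 ~]# nohup /root/servers/hive-apache-2.3.6/bin/hiveserver2 > /tmp/hiveserver2.log 2>&1 &

Note that HiveServer2 can take a while after startup before port 10000 actually accepts connections.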
2. Check whether Hadoop safe mode is off
[root@hadoop01 ~]# hdfs dfsadmin -safemode get
Safe mode is OFF    # this is the normal state
If it reports Safe mode is ON, see https://www.cnblogs.com/-xiaoyu-/p/11399287.html for how to handle it.
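If you do not want to wait for HDFS to leave safe mode on its own, you can usually force it out by hand:

[root@hadoop01 ~]# hdfs dfsadmin -safemode leave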
3. Open http://hadoop01:50070/ in a browser and check whether the Hadoop cluster started normally
4. Check whether the MySQL service is running
[root@hadoop01 ~]# service mysqld status
Redirecting to /bin/systemctl status mysqld.service
● mysqld.service - MySQL 8.0 database server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago
  Process: 5463 ExecStartPost=/usr/libexec/mysql-check-upgrade (code=exited, status=0/SUCCESS)
  Process: 5381 ExecStartPre=/usr/libexec/mysql-prepare-db-dir mysqld.service (code=exited, status=0/SUCCESS)
  Process: 5357 ExecStartPre=/usr/libexec/mysql-check-socket (code=exited, status=0/SUCCESS)
 Main PID: 5418 (mysqld)
   Status: "Server is operational"
    Tasks: 46 (limit: 17813)
   Memory: 512.5M
   CGroup: /system.slice/mysqld.service
           └─5418 /usr/libexec/mysqld --basedir=/usr

Jan 05 23:29:55 hadoop01 systemd[1]: Starting MySQL 8.0 database server...
Jan 05 23:30:18 hadoop01 systemd[1]: Started MySQL 8.0 database server.
The line "Active: active (running) since Sun 2020-01-05 23:30:18 CST; 8min ago" means MySQL started normally.
If it is not running, start MySQL with: service mysqld start
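On systemd machines (which the status output above suggests this is), you can also have MySQL start at boot; the --now flag starts it immediately as well:

[root@hadoop01 ~]# systemctl enable --now mysqld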
Important! Important! Important!
You must use a local MySQL client tool to connect to the MySQL server and confirm that the connection works!!! (This is only a check; a concrete example follows.)
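One way to run this check from another machine, assuming the mysql client is installed there:

mysql -h hadoop01 -P 3306 -uroot -p -e "select 1;"
# If this fails with "Host ... is not allowed to connect", apply the fix below.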
If you cannot connect, fix it as follows:
Configure MySQL so that the root user + password can log in from any host.

1. Enter MySQL:
[root@hadoop102 mysql-libs]# mysql -uroot -p000000
2. Show the databases:
mysql> show databases;
3. Use the mysql database:
mysql> use mysql;
4. Show all tables in the mysql database:
mysql> show tables;
5. Show the structure of the user table:
mysql> desc user;
6. Query the user table:
mysql> select User, Host, Password from user;
7. Update the user table, changing the Host column to %:
mysql> update user set host='%' where host='localhost';
8. Delete the root user's other host entries:
mysql> delete from user where Host='hadoop102';
mysql> delete from user where Host='127.0.0.1';
mysql> delete from user where Host='::1';
9. Flush privileges:
mysql> flush privileges;
10. Quit:
mysql> quit;
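One caveat: on MySQL 5.7 and later (the status output above shows MySQL 8.0), the Password column no longer exists in the user table, so step 6 becomes:

mysql> select User, Host, authentication_string from user;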
Check whether the MySQL driver from mysql-connector-java-5.1.27.tar.gz has already been placed in /root/servers/hive-apache-2.3.6/lib.
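A quick way to check, assuming the install path above; note that it is the jar inside the tarball, not the .tar.gz itself, that must end up in lib:

[root@hadoop01 ~]# ls /root/servers/hive-apache-2.3.6/lib | grep -i mysql-connector

Expected output looks something like: mysql-connector-java-5.1.27-bin.jar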
Also look at the metastore JDBC URL configured in hive-site.xml:

<value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>

Check whether MySQL actually contains the database named there (hive). If MySQL does not have the database, see step 7.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| hive               |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

The hive after 3306/ is the metastore database; you can pick your own name, for example:

<value>jdbc:mysql://hadoop01:3306/metastore?createDatabaseIfNotExist=true</value>
5. Check whether the Hadoop config file core-site.xml contains the following settings
<property>
    <!-- root is the current Linux user; mine is root -->
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>

If your Linux user has another name, e.g. xiaoyu, configure it as:

<property>
    <name>hadoop.proxyuser.xiaoyu.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.xiaoyu.groups</name>
    <value>*</value>
</property>
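After editing core-site.xml, the new proxyuser settings only take effect once HDFS is restarted; alternatively, Hadoop can refresh them in place:

[root@hadoop01 ~]# hdfs dfsadmin -refreshSuperUserGroupsConfiguration
[root@hadoop01 ~]# yarn rmadmin -refreshSuperUserGroupsConfiguration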
6. Other issues
<!-- HDFS file permission problems -->
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
7. org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version
Initialize the metastore schema:

schematool -dbType mysql -initSchema
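To verify the initialization took, schematool can also report the schema version it finds in MySQL:

schematool -dbType mysql -info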
8. One last thing: don't download the wrong package
Apache Hive 2.3.6 download: http://mirror.bit.edu.cn/apache/hive/hive-2.3.6/
Index of /apache/hive/hive-2.3.6

apache-hive-2.3.6-bin.tar.gz    23-Aug-2019 02:53    221M    (download this one)
apache-hive-2.3.6-src.tar.gz    23-Aug-2019 02:53    20M
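After downloading, a typical way to unpack it into the install path used throughout this post (the rename matches the directory name used earlier; adjust to your layout):

tar -zxvf apache-hive-2.3.6-bin.tar.gz -C /root/servers/
mv /root/servers/apache-hive-2.3.6-bin /root/servers/hive-apache-2.3.6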
9. Important
If you have checked everything and it still fails!!! Use jps on every machine to find the running processes, shut them all down, then reboot the machines, and then:
Start ZooKeeper (if you have it)
Start the Hadoop cluster
Start the MySQL service
Start HiveServer2
Connect with Beeline (example below)
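For reference, a typical Beeline invocation for the setup in this post (connecting as root, matching the proxyuser config above):

beeline -u jdbc:hive2://hadoop01:10000 -n root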
The configuration files are below, for reference only; your actual setup takes precedence.
hive-site.xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://hadoop01:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>12345678</value>
    </property>
    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.server2.thrift.bind.host</name>
        <value>hadoop01</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
    <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <!--
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://node03.hadoop.com:9083</value>
    </property>
    -->
</configuration>
core-site.xml
<configuration>
    <!-- Address of the NameNode in HDFS -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop01:9000</value>
    </property>
    <!-- Storage directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/servers/hadoop-2.8.5/data/tmp</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Host for the secondary NameNode (third machine) -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop03:50090</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Job history server address (third machine) -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop03:10020</value>
    </property>
    <!-- Job history server web address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop03:19888</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- How reducers fetch data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Address of the YARN ResourceManager (second machine) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop02</value>
    </property>
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Keep logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>