Connecting Spark to Hive

1 Environment

hadoop-2.7.3

apache-hive-2.1.1-bin

spark-2.1.0-bin-hadoop2.6

jdk1.8

 

2 Configuration files

Configure the MySQL database connection in hive-site.xml, then copy that file and the MySQL JDBC driver into the Spark installation:

cp apache-hive-2.1.1-bin/conf/hive-site.xml  ./spark-2.1.0-bin-hadoop2.6/conf/

cp apache-hive-2.1.1-bin/lib/mysql-connector-java-5.1.40-bin.jar ./spark-2.1.0-bin-hadoop2.6/jars
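For reference, the metastore-connection part of hive-site.xml typically looks like the following sketch. The host, database name, user name, and password here are placeholders, not values from this setup; substitute your own:

```xml
<configuration>
  <!-- JDBC URL of the MySQL database holding the Hive metastore
       (hostname and database name are placeholders) -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
  </property>
  <!-- Driver class provided by mysql-connector-java-5.1.x -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <!-- MySQL credentials (placeholders) -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
</configuration>
```

Because this same file is copied into spark-2.1.0-bin-hadoop2.6/conf/, Spark reads the metastore location from it and sees the same tables as Hive.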

 

3 Startup

Start Hadoop : ./hadoop-2.7.3/sbin/start-all.sh

Start MySQL :  service mysql start

Start Hive :  ./apache-hive-2.1.1-bin/bin/hive

Start Spark : ./spark-2.1.0-bin-hadoop2.6/bin/spark-sql to verify the Hive connection works; the query syntax is the same as Hive's (e.g. show tables;)

      or ./spark-2.1.0-bin-hadoop2.6/bin/spark-shell to run Scala programs
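Inside spark-shell, a session against the Hive metastore might look like the following sketch. The SparkSession is pre-bound as `spark` in Spark 2.x shells; `some_table` is a hypothetical table name for illustration:

```scala
// Same query syntax as the hive and spark-sql CLIs:
spark.sql("show tables").show()

// Query a Hive table into a DataFrame
// (some_table is a placeholder; use a table that exists in your metastore)
val df = spark.sql("SELECT * FROM some_table LIMIT 10")
df.show()
```

If `show tables` lists the tables you created in Hive, the hive-site.xml copied into Spark's conf/ directory is being picked up correctly.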
