Following a tutorial, I tried to use IntelliJ IDEA to write a Spark program that queries a table in Hive. It failed right out of the gate.
I fiddled with it for a long time without success. Searching online, some posts said putting hive-site.xml into the resources directory would fix it; others blamed Hadoop permission issues for Windows users. All of that turned out to be nonsense.
The problem was actually in the code itself. The code is attached below; the reason is explained in the comments.
Spark Hive operations
package sparkSql

import org.apache.spark.sql.SparkSession

/**
 * Created with IntelliJ IDEA.
 */
object SparkHiveSQL {
  def main(args: Array[String]): Unit = {
    // To query Hive through Spark, the SparkSession must call enableHiveSupport();
    // otherwise the Hive tables cannot be found.
    val spark = SparkSession
      .builder()
      .appName("Spark Hive")
      .master("spark://192.168.194.131:7077")
      .enableHiveSupport()
      .getOrCreate()

    val df1 = spark.sql("select * from default.src")
    df1.show()
  }
}
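One related pitfall worth noting: enableHiveSupport() only works if Spark's Hive integration classes are on the classpath. In an sbt project that means depending on the spark-hive artifact in addition to spark-sql; with only spark-sql, the call fails at runtime. A minimal sketch of the dependencies (the version number here is illustrative; match it to your cluster's Spark version):

// build.sbt — versions are illustrative, not prescriptive
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "2.4.8",
  "org.apache.spark" %% "spark-hive" % "2.4.8"
)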