System: Windows x64
Memory: 4 GB
Spark version: spark-1.6.0-bin-hadoop2.6
JDK version: jdk1.7.0_031
Spark installation steps:
1. Download the Spark package from https://spark.apache.org/downloads.html
2. Extract the downloaded package; the extraction path should preferably not contain spaces (I have not tested what happens if it does).
3. Set the SPARK_HOME environment variable and add SPARK_HOME's bin directory to PATH.
4. Since we are running in local mode, Hadoop does not need to be installed, but on Windows you still have to set HADOOP_HOME and place a winutils.exe file under HADOOP_HOME/bin; see https://github.com/spring-projects/spring-hadoop/wiki/Using-a-Windows-client-together-with-a-Linux-cluster for details. A small check program is given right after this list.
5. Open CMD and check whether the spark-shell command runs successfully.
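Before launching spark-shell you can confirm that steps 3 and 4 took effect. The following is just a minimal sketch: the class name EnvCheck is made up, and it only prints what the JVM sees for the two variables and whether winutils.exe is in the expected place.

import java.io.File;

// Hypothetical helper: checks that SPARK_HOME and HADOOP_HOME are visible
// and that winutils.exe sits under HADOOP_HOME\bin as step 4 requires.
public class EnvCheck {
    public static void main(String[] args) {
        String sparkHome = System.getenv("SPARK_HOME");
        String hadoopHome = System.getenv("HADOOP_HOME");
        System.out.println("SPARK_HOME  = " + sparkHome);
        System.out.println("HADOOP_HOME = " + hadoopHome);
        if (hadoopHome != null) {
            File winutils = new File(hadoopHome, "bin" + File.separator + "winutils.exe");
            System.out.println("winutils.exe found: " + winutils.exists());
        }
    }
}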
Possible issue 1: problems related to the xerces.jar package, probably caused by a jar conflict; the most direct fix is to download a fresh JDK.
Possible issue 2: spark-shell throws java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. This appears to be a Hive bug; see https://issues.apache.org/jira/browse/SPARK-10528 for details.
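A workaround commonly reported for this error is to grant write permission on the scratch directory with winutils, for example running winutils.exe chmod 777 \tmp\hive from HADOOP_HOME\bin; I have not verified it against this exact setup, the linked issue is the authoritative reference.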
Configuring the Eclipse Java development environment:
Developing Spark programs in Java depends on a single jar located at SPARK_HOME/lib/spark-assembly-1.6.0-hadoop2.6.0.jar; just import it into Eclipse as a build-path library. Note that Spark only runs on a Java environment of 1.6 or later.
Finally, here is a WordCount program that reads its input from and writes its output to HDFS:
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class WordCount {
    public static void main(String[] args) {
        // Local mode, single worker thread.
        SparkConf conf = new SparkConf().setAppName("WordCount").setMaster("local");
        JavaSparkContext context = new JavaSparkContext(conf);

        // Read the input file from HDFS.
        JavaRDD<String> textFile = context.textFile(
                "hdfs://192.168.1.201:8020/data/test/sequence/sequence_in/file1.txt");

        // Split each line into words.
        JavaRDD<String> words = textFile.flatMap(new FlatMapFunction<String, String>() {
            public Iterable<String> call(String s) {
                return Arrays.asList(s.split(" "));
            }
        });

        // Map each word to the pair (word, 1).
        JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        // Sum the counts for each word.
        JavaPairRDD<String, Integer> counts = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) {
                return a + b;
            }
        });

        // Write the result back to HDFS.
        counts.saveAsTextFile("hdfs://192.168.1.201:8020/data/test/sequence/sequence_out/");

        context.close();
    }
}
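If you only have the local setup described above and no HDFS cluster to point at, the same program can be tested against local files; the D:\ paths below are made-up examples, only the file:/// scheme is the point:

JavaRDD<String> textFile = context.textFile("file:///D:/data/test/file1.txt");  // input on the local disk
// ... flatMap / mapToPair / reduceByKey exactly as above ...
counts.saveAsTextFile("file:///D:/data/test/wordcount_out");  // the output directory must not already exist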