Spark bundles the common classes of computation frameworks found in the big-data field.
Does that mean Spark can replace Hadoop outright? Not entirely:
we can only use Spark Core in place of MapReduce for offline computation; data storage still relies on HDFS.
The Spark + Hadoop combination is the hottest and most promising pairing in the big-data field going forward!
Spark's main selling points:
- Speed
- Ease of use
- A one-stop solution
- Runs on virtually any platform
Shortcomings of Hadoop MapReduce:
- Low level of abstraction: logic must be hand-coded, which makes it hard to get started
- Only two operations, Map and Reduce, so expressiveness is limited
- A job has only the two phases, map and reduce
- Intermediate results are written back to the HDFS file system (slow)
- High latency: suitable only for batch processing, with poor support for interactive and real-time workloads
- Poor performance on iterative computations (the sketch after the highlighted note below illustrates this)
==Hence it is the natural direction of the technology for Hadoop MapReduce to be superseded by a new generation of big-data platforms, and among that new generation Spark currently enjoys the broadest recognition and support==
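To make the last two points concrete: MapReduce writes every round of an iterative job back to HDFS, whereas Spark can pin the working set in memory and reuse it. A minimal sketch, assuming a spark-shell session (introduced further down) where sc is already available; the data and the ten-pass loop are purely illustrative:

// cache() keeps the RDD in memory after the first pass materializes it
val data = sc.parallelize(1 to 1000000).map(_.toDouble).cache()
var acc = 0.0
// ten passes over the same data; after the first pass it is served
// from memory instead of being recomputed or re-read from storage
for (i <- 1 to 10) {
  acc += data.map(_ * i).sum()
}

In MapReduce the equivalent loop would be ten separate jobs, each reading its input from and writing its output to HDFS.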
Single-node installation: prepare the package spark-2.2.0-bin-hadoop2.7.tgz
tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz -C /opt/
mv /opt/spark-2.2.0-bin-hadoop2.7/ /opt/spark
Modify spark-env.sh (the distribution ships only conf/spark-env.sh.template; copy it to spark-env.sh first)
export JAVA_HOME=/opt/jdk
export SPARK_MASTER_IP=uplooking01
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=4
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=2g
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
Configure the environment variables in /etc/profile
# Spark environment variables
export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
Start the single-node Spark
start-all-spark.sh
(the stock Spark script is sbin/start-all.sh; it appears to have been renamed here to avoid clashing with Hadoop's start-all.sh)
Check that it started:
http://uplooking01:8080
Configure spark-env.sh (fully distributed cluster)
[root@uplooking01 /opt/spark/conf]
export JAVA_HOME=/opt/jdk
# host that runs the master
export SPARK_MASTER_IP=uplooking01
# port the master communicates on
export SPARK_MASTER_PORT=7077
# number of CPU cores Spark may use on each worker
export SPARK_WORKER_CORES=4
# one worker instance per host
export SPARK_WORKER_INSTANCES=1
# each worker may use 2 GB of memory
export SPARK_WORKER_MEMORY=2g
# directory containing Hadoop's configuration files
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
Configure slaves
[root@uplooking01 /opt/spark/conf]
uplooking03
uplooking04
uplooking05
Distribute Spark
[root@uplooking01 /opt/spark/conf]
scp -r /opt/spark uplooking02:/opt/
scp -r /opt/spark uplooking03:/opt/
scp -r /opt/spark uplooking04:/opt/
scp -r /opt/spark uplooking05:/opt/
Distribute the environment variables configured on uplooking01 (afterwards, run source /etc/profile on every node so they take effect)
[root@uplooking01 /]
scp -r /etc/profile uplooking02:/etc/
scp -r /etc/profile uplooking03:/etc/
scp -r /etc/profile uplooking04:/etc/
scp -r /etc/profile uplooking05:/etc/
Start Spark
[root@uplooking01 /]
start-all-spark.sh
To make the Master highly available with ZooKeeper, first stop the running Spark cluster
Modify spark-env.sh
# comment out the following two lines
#export SPARK_MASTER_IP=uplooking01
#export SPARK_MASTER_PORT=7077
Add the following:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=uplooking03:2181,uplooking04:2181,uplooking05:2181 -Dspark.deploy.zookeeper.dir=/spark"
Distribute the modified configuration
scp /opt/spark/conf/spark-env.sh uplooking02:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking03:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking04:/opt/spark/conf
scp /opt/spark/conf/spark-env.sh uplooking05:/opt/spark/conf
Start the cluster (the second command brings up the standby Master on uplooking02):
[root@uplooking01 /]
start-all-spark.sh
[root@uplooking02 /]
start-master.sh
spark-shell --master spark://uplooking01:7077
# at startup, spark-shell can specify the resources this application may use
# (total number of cores, and the memory used on each worker)
spark-shell --master spark://uplooking01:7077 --total-executor-cores 6 --executor-memory 1g
# if not specified, it defaults to all cores on every worker and 1g of memory per worker
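Once the shell is up you can confirm what it was granted from the Scala prompt; a quick check, using the sc that spark-shell creates automatically:

sc.master               // the master URL this shell is attached to
sc.defaultParallelism   // parallelism derived from the cores granted to the application

With the ZooKeeper HA setup above you can also pass both masters, e.g. --master spark://uplooking01:7077,uplooking02:7077, and the shell will attach to whichever one is alive.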
Run a word count over the files under hdfs://ns1/sparktest/:
sc.textFile("hdfs://ns1/sparktest/").flatMap(_.split(",")).map((_,1)).reduceByKey(_+_).collect
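The same count, extended a little: a sketch that sorts by frequency and writes the result back to HDFS. The output path /sparktest-out is an assumption for illustration (saveAsTextFile fails if the path already exists):

sc.textFile("hdfs://ns1/sparktest/")
  .flatMap(_.split(","))
  .map((_, 1))
  .reduceByKey(_ + _)
  .sortBy(_._2, ascending = false)   // most frequent words first
  .saveAsTextFile("hdfs://ns1/sparktest-out")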
The roles in the standalone cluster:
- Master: manages the cluster's resources and schedules applications onto workers
- Worker: runs on every slave node and launches executors for applications
- Spark-Submitter ===> Driver: the client that submits the application (spark-submit or spark-shell) starts the Driver, which coordinates the job
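To tie the three roles together, a minimal self-contained application; a sketch assuming Spark 2.2 on the classpath, with the object name, app name, and paths purely illustrative. The JVM that runs main() below is the Driver: it registers with the Master, and the Master has the Workers launch executors for it.

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // the Driver creates the SparkContext and registers with the Master
    val conf = new SparkConf()
      .setAppName("wordcount")
      .setMaster("spark://uplooking01:7077")
    val sc = new SparkContext(conf)

    // the actual tasks run inside executors launched by the Workers
    sc.textFile("hdfs://ns1/sparktest/")
      .flatMap(_.split(","))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    sc.stop()
  }
}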