Apache Spark currently supports three distributed deployment modes: standalone, Spark on Mesos, and Spark on YARN; see the official documentation for details.
| Host | Services |
|---|---|
| tvm11 | ZooKeeper |
| tvm12 | ZooKeeper |
| tvm13 | ZooKeeper, Spark (master), Spark (slave), Scala |
| tvm14 | Spark (backup master), Spark (slave), Scala |
| tvm15 | Spark (slave), Scala |
Scala dependency:
Note that support for Java 7, Python 2.6 and old Hadoop versions before 2.6.5 were removed as of Spark 2.2.0. Support for Scala 2.10 was removed as of 2.3.0. Support for Scala 2.11 is deprecated as of Spark 2.4.1 and will be removed in Spark 3.0.
ZooKeeper: the standalone Master is a single point of failure, so ZooKeeper is used together with at least two Master nodes to provide high availability; the configuration is fairly simple.
As the note above makes clear, Spark is strict about its Scala version: spark-2.4.5 depends on Scala 2.12.x, so install Scala 2.12.x first. Here scala-2.12.10 is used, installed from the binary tarball:
Download the package; it is ready to use once unpacked:
```bash
$ wget https://downloads.lightbend.com/scala/2.12.10/scala-2.12.10.tgz
$ tar zxvf scala-2.12.10.tgz -C /path/to/scala_install_dir
```
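A quick way to verify the unpacked binaries (the install path is the placeholder used above):

```bash
# should print: Scala code runner version 2.12.10 ...
/path/to/scala_install_dir/scala-2.12.10/bin/scala -version
```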
If the system environment should also use the same version of Scala, it can be added to the user environment variables (.bashrc or .bash_profile).
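For example, a minimal sketch of the lines to append, using the same install path that spark-env.sh below points at:

```bash
# append to ~/.bashrc (or ~/.bash_profile)
export SCALA_HOME=/data/template/s/scala/scala-2.12.10
export PATH=$SCALA_HOME/bin:$PATH
```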
Set up passwordless SSH among the three Spark machines for the work user;
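A minimal sketch of that step, assuming no key exists yet and password login is still enabled:

```bash
# run as the work user on each of the three machines
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # skip if a key already exists
for host in tvm13 tvm14 tvm15; do
    ssh-copy-id work@$host                    # append the public key to each node
done
```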
Next, fetch the installation package onto the master machine (tvm13);
Pay attention to the notes on the download page, and to the Hadoop version: it should match the existing environment; if no prebuilt package matches, pick the source release and build it yourself.
Unpack it into the installation directory and you are done.
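A sketch of the download-and-unpack step; the archive URL assumes the prebuilt Hadoop 2.7 package fits your environment, and the install path is a placeholder:

```bash
wget https://archive.apache.org/dist/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz
tar zxvf spark-2.4.5-bin-hadoop2.7.tgz -C /path/to/spark_install_dir
```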
Spark has two main service configuration files: spark-env.sh and slaves.
spark-env.sh: environment variables for the Spark runtime
slaves: the list of worker servers
Configure spark-env.sh: `cp spark-env.sh.template spark-env.sh`
```bash
export JAVA_HOME=/data/template/j/java/jdk1.8.0_201
export SCALA_HOME=/data/template/s/scala/scala-2.12.10
export SPARK_WORKER_MEMORY=2048m
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=2
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=tvm11:2181,tvm12:2181,tvm13:2181 -Dspark.deploy.zookeeper.dir=/data/template/s/spark"

# About the SPARK_DAEMON_JAVA_OPTS parameters:
# -Dspark.deploy.recoveryMode=ZOOKEEPER  # use the ZooKeeper service for recovery on failure
# -Dspark.deploy.zookeeper.url=...       # the ZooKeeper hosts (host:port list)
# -Dspark.deploy.zookeeper.dir=/spark    # directory in ZooKeeper where Spark writes its recovery data
# Other parameters: https://blog.csdn.net/u010199356/article/details/89056304
```
Configure slaves: `cp slaves.template slaves`
```
# A Spark Worker will be started on each of the machines listed below.
tvm13
tvm14
tvm15
```
Configure spark-defaults.conf, which mainly provides defaults for Spark job execution (each of these can also be specified dynamically on the command line):
```
# http://spark.apache.org/docs/latest/configuration.html#configuring-logging
# spark-defaults.conf
spark.app.name                                YunTuSpark
spark.driver.cores                            2
spark.driver.memory                           2g
spark.master                                  spark://tvm13:7077,tvm14:7077
spark.eventLog.enabled                        true
spark.eventLog.dir                            hdfs://cluster01/tmp/event/logs
spark.serializer                              org.apache.spark.serializer.KryoSerializer
spark.serializer.objectStreamReset            100
spark.executor.logs.rolling.time.interval     daily
spark.executor.logs.rolling.maxRetainedFiles  30
spark.ui.enabled                              true
spark.ui.killEnabled                          true
spark.ui.liveUpdate.period                    100ms
spark.ui.liveUpdate.minFlushPeriod            3s
spark.ui.port                                 4040
spark.history.ui.port                         18080
spark.ui.retainedJobs                         100
spark.ui.retainedStages                       100
spark.ui.retainedTasks                        1000
spark.ui.showConsoleProgress                  true
spark.worker.ui.retainedExecutors             100
spark.worker.ui.retainedDrivers               100
spark.sql.ui.retainedExecutions               100
spark.streaming.ui.retainedBatches            100
spark.ui.retainedDeadExecutors                100
# spark.executor.extraJavaOptions             -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
```
Because `spark.eventLog.dir` points at HDFS storage, the corresponding directory must be created in HDFS beforehand:
```bash
hdfs dfs -mkdir -p hdfs://cluster01/tmp/event/logs
```
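A quick check that the directory is in place:

```bash
hdfs dfs -ls hdfs://cluster01/tmp/event/
```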
Edit `~/.bashrc`:
```bash
export SPARK_HOME=/data/template/s/spark/spark-2.4.5-bin-hadoop2.7
export PATH=$SPARK_HOME/bin/:$PATH
```
Once the above configuration is done, distribute `/path/to/spark-2.4.5-bin-hadoop2.7` to every slave node and set the environment variables on each of them.
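A minimal distribution sketch, assuming rsync is available and every node uses the same install path:

```bash
for host in tvm14 tvm15; do
    rsync -a /path/to/spark-2.4.5-bin-hadoop2.7 work@$host:/path/to/
done
```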
First, start all services on the master node: `./sbin/start-all.sh`
Then start a master service separately on the backup node: `./sbin/start-master.sh`
After startup, check the status in the web UI:
master (port 8081): Status: ALIVE
backup (port 8080): Status: STANDBY
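A rough command-line check (the master UI pages are plain HTML, so just grep for the status keyword; ports as stated above):

```bash
curl -s http://tvm13:8081/ | grep -m1 -o 'ALIVE'     # expect: ALIVE
curl -s http://tvm14:8080/ | grep -m1 -o 'STANDBY'   # expect: STANDBY
```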