Fully Distributed Spark Installation

Hadoop must be installed before Spark. The author has already set up three virtual machines in VMware.

1. Prerequisites

1) A Hadoop cluster. The author has already set up three virtual machines in VMware with a Hadoop cluster installed on them.

2. Required software

1) Scala: 2.11.0

2) Spark: 2.2.0

3. Install Scala (required on all three machines in the cluster; one machine is used as the example below)

1) Download scala-2.11.0 and extract it to /usr/local/. The author's downloaded Scala archive is under /home/hadoop/tools.

hadoop@Worker1:~$ sudo mv /home/hadoop/tools/scala-2.11.0.tgz /usr/local/
hadoop@Worker1:/usr/local$ sudo tar -zxvf scala-2.11.0.tgz

2) Edit ~/.bashrc and add SCALA_HOME

hadoop@Worker1:/usr/local$ vim ~/.bashrc
export SCALA_HOME=/usr/local/scala-2.11.0
export JAVA_HOME=/usr/local/bin/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export PATH=${SCALA_HOME}/bin:$PATH
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:${HADOOP_HOME}/bin

3) Apply the changes

hadoop@Worker1:/usr/local$ source ~/.bashrc

4) Verify that Scala is installed correctly

hadoop@Worker1:/usr/local$ scala -version

If the output is: Scala code runner version 2.11.0 -- Copyright 2002-2013, LAMP/EPFL, the installation succeeded.
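
As an optional extra check, the Scala REPL can be started and a simple expression evaluated; the session below is a minimal sketch of what a working 2.11.0 install typically prints:

hadoop@Worker1:/usr/local$ scala
scala> 1 + 1
res0: Int = 2
scala> :quit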

5) Repeat the steps above on the other machines to install Scala.
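
Instead of downloading Scala again on each node, the archive and the updated ~/.bashrc can be copied over, assuming passwordless SSH between the nodes is already set up (as is typical for a Hadoop cluster); Worker2 is used as the example here:

hadoop@Worker1:~$ scp /home/hadoop/tools/scala-2.11.0.tgz hadoop@Worker2:/home/hadoop/tools/
hadoop@Worker1:~$ scp ~/.bashrc hadoop@Worker2:~/.bashrc

Then extract the archive to /usr/local and run source ~/.bashrc on Worker2 exactly as above.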

4. Install the Spark cluster: Spark supports three cluster managers, Mesos, YARN, and Standalone; the author configures Standalone mode here.

1) Download spark-2.2.0-bin-hadoop2.7.tgz and extract it

hadoop@Master:/usr/local$ sudo mv /home/hadoop/tools/spark-2.2.0-bin-hadoop2.7.tgz /usr/local/
hadoop@Master:/usr/local$ sudo tar -zxvf spark-2.2.0-bin-hadoop2.7.tgz

2) Change ownership

hadoop@Master:/usr/local$ sudo chown -R hadoop:root ./spark-2.2.0-bin-hadoop2.7

3) Configure the Spark environment in ~/.bashrc

hadoop@Master:/usr/local$ vim ~/.bashrc
export SCALA_HOME=/usr/local/scala-2.11.0
export JAVA_HOME=/usr/local/bin/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
export PATH=${SCALA_HOME}/bin:$PATH
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:${HADOOP_HOME}/bin
export SPARK_HOME=/usr/local/spark-2.2.0-bin-hadoop2.7
export PATH=${SPARK_HOME}/bin:${SPARK_HOME}/sbin:$PATH

4) Apply the configuration

hadoop@Master:/usr/local$ source ~/.bashrc

5) Go to Spark's conf directory and configure spark-env.sh

hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ cp spark-env.sh.template spark-env.sh
hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ vim spark-env.sh
export JAVA_HOME=/usr/local/bin/jdk1.8.0_131
export SCALA_HOME=/usr/local/scala-2.11.0
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export HADOOP_CONF_DIR=/usr/local/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=Master
export SPARK_WORKER_CORES=2
export SPARK_DRIVER_MEMORY=1G
export SPARK_WORKER_MEMORY=1g
export SPARK_EXECUTOR_MEMORY=1g

6) Configure slaves

hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ cp slaves.template slaves
hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ vim slaves
Worker1
Worker2

7) Configure spark-defaults.conf
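
As with spark-env.sh, this file can be created from the template shipped in the conf directory before adding the settings below:

hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ cp spark-defaults.conf.template spark-defaults.conf
hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/conf$ vim spark-defaults.conf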

spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://Master:9000/historyserverforSpark
spark.yarn.historyServer.address  Master:18080
spark.history.fs.logDirectory  hdfs://Master:9000/historyserverforSpark

The historyserverforSpark directory must be created in HDFS manually; otherwise spark-shell will report an error on startup.
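
A minimal way to create it, assuming HDFS on Master:9000 is already running and Hadoop's bin directory is on the PATH:

hadoop@Master:/usr/local$ hdfs dfs -mkdir -p /historyserverforSpark

Before starting the cluster, Worker1 and Worker2 also need the same Spark installation and configuration. One way, again assuming passwordless SSH, is to copy the configured directory over and then move it into place on each worker:

hadoop@Master:/usr/local$ scp -r spark-2.2.0-bin-hadoop2.7 hadoop@Worker1:/home/hadoop/tools/
hadoop@Master:/usr/local$ scp -r spark-2.2.0-bin-hadoop2.7 hadoop@Worker2:/home/hadoop/tools/

On each worker, move the directory to /usr/local, change its ownership as in step 2, and add SPARK_HOME to ~/.bashrc as in step 3.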

8) Start the cluster

hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/sbin$ ./start-all.sh
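
If startup succeeds, jps should list a Master process on Master and a Worker process on Worker1 and Worker2, and the standalone master's web UI is reachable at http://Master:8080 (assuming the default port is unchanged):

hadoop@Master:/usr/local/spark-2.2.0-bin-hadoop2.7/sbin$ jps
hadoop@Worker1:~$ jps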