Author: foochane
Original link: foochane.cn/article/201…
Before installing Spark, a Hadoop cluster environment must already be set up. If you don't have one yet, see: Setting Up a Hadoop Distributed Cluster.
Software | Version | Download |
---|---|---|
linux | Ubuntu Server 18.04.2 LTS | www.ubuntu.com/download/se… |
hadoop | hadoop-2.7.1 | archive.apache.org/dist/hadoop… |
java | jdk-8u211-linux-x64 | www.oracle.com/technetwork… |
spark | spark-2.4.3-bin-hadoop2.7 | www.apache.org/dyn/closer.… |
scala | scala-2.12.5 | www.scala-lang.org/download/ |
Anaconda | Anaconda3-2019.03-Linux-x86_64.sh | www.anaconda.com/distributio… |
Name | IP | Hostname |
---|---|---|
Master node | 192.168.233.200 | Master |
Worker node 1 | 192.168.233.201 | Slave01 |
Worker node 2 | 192.168.233.202 | Slave02 |
Extract the Spark package to the installation directory and give it a shorter name:
$ tar zxvf spark-2.4.3-bin-hadoop2.7.tgz -C /usr/local/bigdata/
$ cd /usr/local/bigdata/
$ mv spark-2.4.3-bin-hadoop2.7 spark-2.4.3
The configuration files live in the /usr/local/bigdata/spark-2.4.3/conf directory.
Rename spark-env.sh.template to spark-env.sh and add the following:
# Paths to the Scala, Java and Hadoop installations
export SCALA_HOME=/usr/local/bigdata/scala
export JAVA_HOME=/usr/local/bigdata/java/jdk1.8.0_211
export HADOOP_HOME=/usr/local/bigdata/hadoop-2.7.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Hostname of the Spark master node
SPARK_MASTER_IP=Master
# Directory Spark uses for scratch space on this node
SPARK_LOCAL_DIRS=/usr/local/bigdata/spark-2.4.3
# Memory available to the driver process
SPARK_DRIVER_MEMORY=512M
Rename slaves.template to slaves and change its contents to:
Slave01
Slave02
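Every node needs the same Spark installation and configuration. If the worker nodes don't have it yet, one way to get it there (a sketch, assuming the same /usr/local/bigdata layout on every node and SSH access with the hadoop user, as set up for the Hadoop cluster) is to copy the configured directory from Master:

$ scp -r /usr/local/bigdata/spark-2.4.3 hadoop@Slave01:/usr/local/bigdata/
$ scp -r /usr/local/bigdata/spark-2.4.3 hadoop@Slave02:/usr/local/bigdata/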
Add the following to the ~/.bashrc file, then run $ source ~/.bashrc to make it take effect:
export SPARK_HOME=/usr/local/bigdata/spark-2.4.3
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
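A quick, optional check that the variables took effect (the expected output assumes the paths above):

$ echo $SPARK_HOME
/usr/local/bigdata/spark-2.4.3
$ which spark-shell
/usr/local/bigdata/spark-2.4.3/bin/spark-shell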
First start the Hadoop cluster (HDFS and YARN) together with the MapReduce job history server. Note that Hadoop's history server is started through mr-jobhistory-daemon.sh; start-history-server.sh is a Spark script, not a Hadoop one:

$ cd $HADOOP_HOME/sbin/
$ ./start-dfs.sh
$ ./start-yarn.sh
$ ./mr-jobhistory-daemon.sh start historyserver
Then start Spark and its history server:

$ cd $SPARK_HOME/sbin/
$ ./start-all.sh
$ ./start-history-server.sh
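To check that everything came up, you can run jps (shipped with the JDK) on each node. On Master the output should look roughly like the list below, depending on how the Hadoop cluster was configured (process IDs omitted); Slave01 and Slave02 should show DataNode, NodeManager and Worker:

$ jps
NameNode
SecondaryNameNode
ResourceManager
JobHistoryServer
Master
HistoryServer
Jps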
One thing to note: since the environment variables are already configured, start-dfs.sh and start-yarn.sh can be run without changing into the sbin directory first. However, start-all.sh and stop-all.sh exist in both the hadoop sbin directory and the spark sbin directory, so to avoid invoking the wrong one it is safest to run them via an explicit path.
Once Spark has started, you can view the cluster's resource usage in a browser at http://192.168.233.200:8080/, where 192.168.233.200 is the Master node's IP.
Spark applications can be written in Scala or in Python.

Spark already ships with Scala by default. If it is missing, or another version is needed, it can be installed from the download package as follows. First download the tarball, then extract it:
$ tar zxvf scala-2.12.5.tgz -C /usr/local/bigdata/
Then add the following to the ~/.bashrc file and run $ source ~/.bashrc to make it take effect:
export SCALA_HOME=/usr/local/bigdata/scala-2.12.5
export PATH=/usr/local/bigdata/scala-2.12.5/bin:$PATH
To verify the installation, run:

$ scala -version
Scala code runner version 2.12.5 -- Copyright 2002-2018, LAMP/EPFL and Lightbend, Inc.
Run spark-shell --master spark://master:7077 to start the Spark shell:
hadoop@Master:~$ spark-shell --master spark://master:7077
19/06/08 08:01:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://Master:4040
Spark context available as 'sc' (master = spark://master:7077, app id = app-20190608080221-0002).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_211)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
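At the scala> prompt you can run a one-line job to confirm the cluster actually executes work. This is just a quick smoke test, not part of the original setup steps:

scala> sc.parallelize(1 to 100).reduce(_ + _)
res0: Int = 5050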
The system already has Python installed by default, but for convenient development it is recommended to install Anaconda. The installer used here is Anaconda3-2019.03-Linux-x86_64.sh, and installation is simple: just run $ bash Anaconda3-2019.03-Linux-x86_64.sh.
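To make sure PySpark uses the Anaconda interpreter on every node rather than the system Python, you can set the PYSPARK_PYTHON environment variable (for example in ~/.bashrc or spark-env.sh). The path below assumes Anaconda's default install location of ~/anaconda3:

export PYSPARK_PYTHON=$HOME/anaconda3/bin/python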
Then run $ pyspark --master spark://master:7077. The session looks like this:
hadoop@Master:~$ pyspark --master spark://master:7077
Python 3.6.3 |Anaconda, Inc.| (default, Oct 13 2017, 12:02:49)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
19/06/08 08:12:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.4.3
      /_/

Using Python version 3.6.3 (default, Oct 13 2017 12:02:49)
SparkSession available as 'spark'.
>>>
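As in the Scala shell, a one-liner at the >>> prompt verifies that jobs run on the cluster (again just a smoke test, not from the original article):

>>> sc.parallelize(range(1, 101)).sum()
5050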