4) Spark cluster setup

1. Install Spark

Extract the archive (in /opt):

tar zxvf spark-1.6.1-bin-hadoop2.6.tgz
mv spark-1.6.1-bin-hadoop2.6 spark
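
As a quick sanity check (a minimal sketch; adjust the path if you extracted elsewhere), confirm the launcher scripts are in place:

ls /opt/spark/bin
# should list spark-shell, spark-submit and pyspark, among others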

Set the environment variables:

vi ~/.bashrc

    export SPARK_HOME=/opt/spark

    export PATH=$SPARK_HOME/bin:$PATH

    export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

source ~/.bashrc
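
An optional check that the variables took effect in the current shell:

echo $SPARK_HOME        # expect /opt/spark
which spark-submit      # expect /opt/spark/bin/spark-submit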

2. Edit spark-env.sh

cd /opt/spark/conf
mv spark-env.sh.template spark-env.sh
vi spark-env.sh

    export JAVA_HOME=/usr/jdk1.7.0_55

    export SCALA_HOME=/opt/scala

    export SPARK_MASTER_IP=192.168.252.164

    export SPARK_WORKER_MEMORY=1g

    export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
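
Typos in these paths are easy to miss, so it can help to verify that each directory exists; a minimal sketch using the values above:

for d in /usr/jdk1.7.0_55 /opt/scala /opt/hadoop/etc/hadoop; do
    [ -d "$d" ] && echo "ok: $d" || echo "MISSING: $d"
done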

3. Edit slaves

List the worker hostnames (one per line) in /opt/spark/conf/slaves. start-all.sh will log into each of these hosts over SSH, so a quick connectivity check follows the list:

    hadoop001

    hadoop002

    hadoop003
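
A quick connectivity check from hadoop001 (each command should print the hostname without prompting for a password; set up passwordless SSH first if it does not):

for h in hadoop001 hadoop002 hadoop003; do
    ssh "$h" hostname
done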

4. Second and third nodes

Copy the spark directory to the second and third nodes:

cd /opt
scp -r spark hadoop002:/opt
scp -r spark hadoop003:/opt
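
An optional spot check that the copy landed intact on the workers:

for h in hadoop002 hadoop003; do
    ssh "$h" ls /opt/spark/conf/spark-env.sh
done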

Copy the environment variables to the second and third nodes:

scp ~/.bashrc hadoop002:~
scp ~/.bashrc hadoop003:~
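
The copied ~/.bashrc only takes effect on the next login shell; an optional check that it arrived:

for h in hadoop002 hadoop003; do
    ssh "$h" 'grep SPARK_HOME ~/.bashrc'
done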

5. Startup

Start the Spark cluster from the master node. Use the full path so you do not pick up Hadoop's start-all.sh, which has the same name (Spark's copy lives in sbin, which is not on the PATH set above):

    /opt/spark/sbin/start-all.sh
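
If you prefer to start the daemons one by one, the standard Spark scripts are equivalent (assuming the default master port 7077):

/opt/spark/sbin/start-master.sh                         # on hadoop001
/opt/spark/sbin/start-slave.sh spark://hadoop001:7077   # on each worker node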

Verify

    Use jps, or the master web UI on port 8080 (http://hadoop001:8080), to verify that the cluster started successfully:

    hadoop001: Master

    hadoop002: Worker

    hadoop003: Worker

    Launch spark-shell and check that everything works (a smoke test follows).
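
A non-interactive smoke test is to pipe a one-liner through spark-shell against the standalone master; summing 1 to 100 should print 5050:

echo 'sc.parallelize(1 to 100).reduce(_ + _)' | spark-shell --master spark://hadoop001:7077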
