Spark HA

 * vi spark-env.sh
```
export JAVA_HOME=/usr/java/jdk1.7.0_71
export HADOOP_CONF_DIR={hadoop-home}/etc/hadoop
export SPARK_WORKER_CORES=4        # cores each worker may use
export SPARK_WORKER_MEMORY=12g     # memory each worker may use
export SPARK_MASTER_IP={ip_addr}   # mainly to avoid problems on hosts with multiple network interfaces
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url={zookeeper-address}:2181,{zookeeper-address}:2181 -Dspark.deploy.zookeeper.dir=/spark"
```
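 Before starting anything it is worth confirming that the ZooKeeper ensemble named in spark.deploy.zookeeper.url is reachable. A minimal check, assuming the ZooKeeper client scripts live under a {zookeeper-home} directory (that path is an assumption; the /spark znode matches the config above):
```
# Confirm the ensemble answers (runs a single "ls /" and exits).
{zookeeper-home}/bin/zkCli.sh -server {zookeeper-address}:2181 ls /
# After the masters have been started once, the recovery state appears here:
{zookeeper-home}/bin/zkCli.sh -server {zookeeper-address}:2181 ls /spark
```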

* vi log4j.properties
  In production the root log level should be ERROR; it greatly reduces the space taken by logs.
       >log4j.rootCategory=INFO, console
       change it to
       >log4j.rootCategory=ERROR, console
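  If conf/log4j.properties does not exist yet, it can be created from the template that ships with Spark. A sketch of the same change from the command line, assuming GNU sed and the stock template contents:
```
cd {spark-home}/conf
cp log4j.properties.template log4j.properties
# Lower the root log level from INFO to ERROR.
sed -i 's/^log4j.rootCategory=INFO, console/log4j.rootCategory=ERROR, console/' log4j.properties
```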
 * vi slaves
  Add the hostnames of the other worker nodes.
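  For reference, conf/slaves is just one worker hostname per line; the names below are placeholders:
```
# {spark-home}/conf/slaves
worker-node-01
worker-node-02
worker-node-03
```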
 * vi spark-defaults.conf 
       >spark.serializer                                  org.apache.spark.serializer.KryoSerializer
       Uncomment this line (note the key is spark.serializer).
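  Beyond enabling Kryo, spark-defaults.conf can also tune the serializer. The extra keys below are optional, depend on the Spark version, and the values and class names are only examples:
```
spark.serializer                 org.apache.spark.serializer.KryoSerializer
# Optional tuning (example values; adjust to the job):
spark.kryoserializer.buffer.max  128m
spark.kryo.classesToRegister     com.example.MyEvent,com.example.MyRecord
```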
 * Create a job launch script **optional step**: it only fits the current job structure (see the usage example after the script).
 ```
#!/bin/bash
# Launcher for Spark jobs.
#   run.sh start <mainJar> <mainClass> [isbackground (true/false)]
#   run.sh stop  <mainJar>

# With HA enabled list both masters, e.g. spark://master1:7077,master2:7077;
# without HA a single address is enough.
Spark_Master=spark://{master-address(es)}:7077

if [ -z "$2" ]; then
  echo "jar path is a required parameter."
  echo "Usage: run.sh start <mainJar> <mainClass> <isbackground (true/false)>"
  echo "Usage: run.sh stop <mainJar>"
  exit 1
fi

# Resolve the directory this script lives in and load the Spark environment.
bin="`dirname "$0"`"
bin="`cd "$bin"; pwd`"
. "$bin/../conf/spark-env.sh"

jardir="`dirname "$2"`"
PID_FILE=$jardir/spark.pid

# Build a comma-separated list of the dependency jars under <jardir>/lib.
libs=""
for jarz in $jardir/lib/*.jar; do
  if [ "$libs" != "" ]; then
    libs=$libs,$jarz
  else
    libs=$jarz
  fi
done

case "$1" in
start)
  if [ "$4" != "true" ]; then
    # Run in the foreground.
    $bin/spark-submit \
      --master $Spark_Master \
      --jars $libs \
      --class $3 $2
  else
    # Run in the background and remember the PID for "stop".
    nohup $bin/spark-submit \
      --master $Spark_Master \
      --jars $libs \
      --class $3 $2 \
      > $jardir/stdout.log 2> $jardir/stderr.log &
    echo $! > $PID_FILE
  fi
  ;;
stop)
  if [ -e $PID_FILE ]; then
    pid=`cat $PID_FILE`
    kill -9 $pid
  else
    echo "[ERROR] Cannot find $PID_FILE !"
  fi
  ;;
esac
```
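 For example, assuming the script is saved as {spark-home}/bin/run.sh and the job's dependency jars sit in a lib/ directory next to the main jar (the paths and class name below are made up):
```
cd {spark-home}/bin
# Foreground run:
./run.sh start /data/jobs/myjob/myjob.jar com.example.MyJob
# Background run; the PID is written to /data/jobs/myjob/spark.pid:
./run.sh start /data/jobs/myjob/myjob.jar com.example.MyJob true
# Stop the background run:
./run.sh stop /data/jobs/myjob/myjob.jar
```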
 * Copy the Spark directory to master2 and the other nodes
       >scp -r {spark-install-dir}/spark-xxx other-node:{spark-install-dir}  
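  With more than a couple of nodes it is convenient to loop over them; the host list below is only an example:
```
# Push the same build to every node (worker host names are placeholders).
for host in master2 {worker-node-01} {worker-node-02}; do
  scp -r {spark-install-dir}/spark-xxx $host:{spark-install-dir}
done
```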
 * On master2, change the master IP address
       > vi {spark-home}/conf/spark-env.sh
```
export SPARK_MASTER_IP={ip_addr}
```
 * Start the cluster
       on master1:
       >{spark-home}/sbin/start-all.sh
       on master2:
       >{spark-home}/sbin/start-master.sh
 * Open the master web UI at master1:8080 (port 7077 is the cluster URL that spark-submit connects to, not the web UI)
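 To confirm that failover actually works, stop the active master and watch the standby take over; the check below assumes the default web UI port 8080, and recovery can take a minute or two while state is read back from ZooKeeper:
```
# On master1: stop the currently active master.
{spark-home}/sbin/stop-master.sh
# On any node: master2 should move from STANDBY to ALIVE.
curl -s http://master2:8080/json/ | grep '"status"'
```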
