Spark: "Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory" when submitting a job

When submitting a job to a Spark cluster with spark-submit, the following exception appears:

Exception 1: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

Checking the Spark log file spark-Spark-org.apache.spark.deploy.master.Master-1-hadoop1.out shows the following:

The Spark Web UI at this point looks like this:

Reason: insufficient memory

Previously, in the Spark configuration file spark-env.sh, SPARK_LOCAL_IP was set to localhost and the worker memory to 512M. As a result, every worker shown in the Spark UI resolved to the default localhost address of host hadoop1, and the workers were distinguished only by the ports assigned to them. The machine's maximum available memory was 2.9G, and the memory claimed by the workers (5 * 512M) exceeded that limit, hence the error above.
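For illustration, a spark-env.sh configured along these lines would produce the situation described above; the values here are assumptions used to sketch the misconfiguration, not a copy of the original file:

# spark-env.sh (misconfigured sketch)
# Every node binds to localhost, so all workers register under
# hadoop1's default localhost address, and each worker only offers 512M.
export SPARK_LOCAL_IP=localhost
export SPARK_WORKER_MEMORY=512m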

Solution:

Modify the Spark configuration file spark-env.sh: change SPARK_LOCAL_IP from localhost to the hostname of each machine (hadoop1, hadoop2, ...), and set SPARK_WORKER_MEMORY to a value smaller than the memory actually available on that machine.
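A minimal sketch of the corrected spark-env.sh on hadoop1, assuming the machine can spare 1g per worker (the actual value should be chosen per host, below what that machine really has):

# spark-env.sh on hadoop1 (corrected sketch)
# Bind the worker to this host's own name instead of localhost.
export SPARK_LOCAL_IP=hadoop1
# Keep the worker memory below the memory actually available on this machine.
export SPARK_WORKER_MEMORY=1g

On hadoop2 the same file sets SPARK_LOCAL_IP=hadoop2, and so on. After editing, restart the standalone cluster (for example with sbin/stop-all.sh followed by sbin/start-all.sh) so the workers re-register with the new settings.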

 

Submit a job (WordCount) to the Spark cluster; the corresponding script is as follows:
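A submit script of this general shape might look like the sketch below; the class name com.spark.WordCount, the jar path, the input path, and the executor memory size are illustrative assumptions rather than the exact values used here:

#!/bin/bash
# runSpark.sh (sketch): submit the WordCount application to the standalone master.
# Class name, jar path, input path, and memory size are assumptions.
$SPARK_HOME/bin/spark-submit \
  --master spark://hadoop1:7077 \
  --class com.spark.WordCount \
  --executor-memory 512m \
  /opt/jars/wordcount.jar \
  hdfs://hadoop1:9000/input/words.txt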

Run the script to execute the Spark job; the process is as follows:

./runSpark.sh

The corresponding WordCount output is as follows:

The corresponding Spark UI while the job is running looks like this:
