Running "spark-shell --master yarn --deploy-mode client" fails with a virtual memory overflow error

Run the following command on a Hadoop 2.7.2 cluster:

spark-shell --master yarn --deploy-mode client

It throws the following error:

org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.

Checking the application's log in the YARN Web UI shows:

Container [pid=28920,containerID=container_1389136889967_0001_01_000121] is running beyond virtual memory limits. Current
usage: 1.2 GB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
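The 2.1 GB figure in the log is not arbitrary: it is the container's physical memory allocation (1 GB) multiplied by the default yarn.nodemanager.vmem-pmem-ratio of 2.1. A quick sketch of that arithmetic:

```shell
# Virtual memory limit = container physical memory x yarn.nodemanager.vmem-pmem-ratio
pmem_mb=1024          # 1 GB physical memory allocated to the container
vmem_pmem_ratio=2.1   # default value of yarn.nodemanager.vmem-pmem-ratio
vmem_limit_mb=$(awk -v p="$pmem_mb" -v r="$vmem_pmem_ratio" 'BEGIN { printf "%.0f", p * r }')
echo "${vmem_limit_mb} MB"   # 2150 MB, i.e. the 2.1 GB limit shown in the log
```

Since the JVM mapped 2.2 GB of virtual memory, it exceeded this 2.1 GB ceiling and the NodeManager killed the container, even though physical memory pressure alone would not have triggered the kill.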

This happens because virtual memory usage exceeded the configured limit; it can be avoided by changing the configuration.

There is a check at the YARN level on the ratio of virtual to physical memory usage. The issue is not that the VM lacks sufficient physical memory, but that virtual memory usage is higher than expected for the given physical memory.

Note: this happens on CentOS/RHEL 6 due to its aggressive allocation of virtual memory.

It can be resolved by either:

  1. Disabling the virtual memory usage check by setting yarn.nodemanager.vmem-check-enabled to false; or
  2. Increasing the VM:PM ratio by setting yarn.nodemanager.vmem-pmem-ratio to a higher value (the default is 2.1).

Add the following properties in yarn-site.xml:

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
        <description>Whether virtual memory limits will be enforced for containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
        <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
    </property>

  3. Then, restart YARN.
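Restarting YARN can be done with the standard scripts shipped in the Hadoop 2.7.2 distribution; a sketch, assuming HADOOP_HOME points at the installation and the scripts are run on the ResourceManager host:

```shell
# Stop and start the YARN daemons so the new yarn-site.xml settings take effect
"$HADOOP_HOME/sbin/stop-yarn.sh"
"$HADOOP_HOME/sbin/start-yarn.sh"
```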

Reference:

http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/

http://blog.chinaunix.net/uid-28311809-id-4383551.html

http://stackoverflow.com/questions/21005643/container-is-running-beyond-memory-limits
