SparkPi example exceeds virtual memory limits (yarn)


Eric Kimbrel-2
I am running Hadoop version 2.2.0-cdh5.0.0-beta-1 with Spark 0.8.1.

When I try to run the SparkPi example, I get an error for exceeding virtual memory limits:

HADOOP_CONF_DIR=/etc/hadoop/conf \
SPARK_JAR=/usr/lib/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0-cdh5.0.0-beta-1.jar \
 /usr/lib/spark/spark-class org.apache.spark.deploy.yarn.Client \
 --jar /usr/lib/spark/examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar \
 --class org.apache.spark.examples.SparkPi \
 --args yarn-standalone \
 --num-workers 3 \
 --master-memory 4g \
 --worker-memory 2g \
 --worker-cores 1

The container is killed with the following error:

is running beyond virtual memory limits. Current usage: 434.2 MB of 8 GB physical memory used; 65.6 GB of 16.8 GB virtual memory used. Killing container.

Several things don't make sense to me here. First, why do I have 8 GB of physical memory when 4g was specified for the master and 2g for the workers? Second, why is the SparkPi example consuming 65.6 GB of virtual memory?
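
For what it's worth, the 16.8 GB virtual memory limit looks like the container's 8 GB physical allocation multiplied by YARN's default `yarn.nodemanager.vmem-pmem-ratio` of 2.1 (8 × 2.1 = 16.8). If the virtual memory check itself is the problem, a common workaround is to relax it in yarn-site.xml. The two property names below are real YARN settings; the values are only an illustrative sketch, not a recommendation:

```xml
<!-- yarn-site.xml: illustrative values only -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <!-- disable the NodeManager's virtual memory check entirely -->
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <!-- or instead raise the vmem:pmem ratio (default is 2.1) -->
  <value>10</value>
</property>
```

Either change requires restarting the NodeManagers to take effect.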