[Worker Crashing] OutOfMemoryError: GC overhead limit exceeded

Spark version: 1.6.2
Hadoop: 2.6.0

All VMs are deployed on AWS.
1 Master (t2.large)
1 Secondary Master (t2.large)
5 Workers (m4.xlarge)
Zookeeper (t2.large)

Recently, 2 of our workers went down with an out-of-memory exception:
java.lang.OutOfMemoryError: GC overhead limit exceeded (max heap: 1024 MB)

Both of these worker processes were hung; we restarted them to bring them back to a normal state.

Here is the complete exception:

Master's spark-defaults.conf file:

Master's spark-env.sh

Slave's spark-defaults.conf file:

So, what could be the reason for our workers crashing with OutOfMemoryError, and how can we avoid this in the future?
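For context on what we have looked at so far: the 1024 MB max heap in the error matches Spark's default daemon heap, so one mitigation we are considering (an assumption on our part, not something we have verified fixes this) is raising the Worker daemon's heap via SPARK_DAEMON_MEMORY and enabling worker cleanup so data from finished applications doesn't accumulate in the Worker process. A sketch of what that would look like in spark-env.sh (the 2g value is illustrative):

```shell
# spark-env.sh (sketch; values are assumptions, not tested settings)

# Raise the standalone Master/Worker daemon JVM heap above the 1 GB default
export SPARK_DAEMON_MEMORY=2g

# Periodically clean up work dirs of finished applications on the Worker,
# so per-application state doesn't build up in the daemon over time
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true"
```

Would this be the right direction, or is the GC overhead limit in the Worker daemon itself a symptom of something else?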