The parameter spark.yarn.executor.memoryOverhead


Ashok Kumar-2
Hi Gurus,

The parameter spark.yarn.executor.memoryOverhead is explained as below:

executorMemory * 0.10, with minimum of 384
The amount of off-heap memory (in megabytes) to be allocated per executor. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%).
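For concreteness, the default formula quoted above can be sketched as follows (the function name is mine, not Spark's; Spark itself computes this internally in MB):

```python
# Sketch of the default spark.yarn.executor.memoryOverhead calculation,
# per the quoted formula: executorMemory * 0.10, with a minimum of 384 MB.

def default_memory_overhead_mb(executor_memory_mb: int) -> int:
    """Return the default memory overhead in megabytes for a given executor heap size."""
    return max(int(executor_memory_mb * 0.10), 384)

# For a 10 GB (10240 MB) executor, the default overhead is 1024 MB (~1 GB):
print(default_memory_overhead_mb(10240))  # 1024

# For small executors the 384 MB floor applies:
print(default_memory_overhead_mb(1024))   # 384
```

Note that YARN sizes the container as executor memory plus this overhead, so a 10 GB executor actually requests roughly 11 GB from the resource manager.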

So does that mean that for a 10 GB executor this should ideally be set to ~10%, i.e. 1 GB?
What would happen if we set it higher, say to 30% (~3 GB)?
What exactly is this memory used for (as opposed to the heap memory allocated to the executor)?

Thanking you