In the documentation for running Spark on YARN, it states:
"We do not requesting container resources based on the number of cores. Thus the numbers of cores given via command line arguments cannot be guaranteed."
Can someone explain this a bit more? Is it simply a reflection of the fact that YARN could come back with the requested number of containers, but those containers could have fewer cores than requested? (And if so, does it mean Spark will not ask for additional containers to make up the required number of cores?)
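For context, here is the kind of spark-submit invocation the quoted note is about; the flag names are the standard YARN-mode options, while the specific values and the application name are hypothetical:

```shell
# Request 4 executors, each with 2 cores and 4 GB of memory, in YARN mode.
# Per the quoted docs, the container request sent to YARN is based on the
# memory setting; the core count is not part of that request, so the
# containers YARN grants are not guaranteed to reserve 2 cores each.
spark-submit \
  --master yarn \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 4g \
  my_app.py
```

My understanding (please correct me if wrong) is that Spark still *uses* the `--executor-cores` value to decide how many tasks to run concurrently per executor, even though YARN does not enforce it.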