[Spark-Mesos] Detectability for resource starvation for Spark on Mesos
We currently run Spark in coarse-grained mode on a Mesos 1.1.0 cluster. We recently had a production issue in which our Spark frameworks accepted resource offers from the Mesos master, so executors were started and the Spark driver was aware of them, but the driver never scheduled any tasks because its minimum registered-resources threshold was not met, and nothing happened for a long time. The held resources were never offered back to the master for re-allocation, which brought the entire cluster to a halt until we intervened manually.
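If the driver was indeed stuck waiting on the registered-resources threshold, one partial mitigation (an assumption on our side; the exact values below are illustrative, not what we run) would be to bound the wait so the driver starts scheduling with whatever executors have registered instead of holding resources indefinitely:

```properties
# spark-defaults.conf (illustrative values)
# Fraction of requested resources that must register before scheduling begins
spark.scheduler.minRegisteredResourcesRatio        0.8
# After this long, start scheduling even if the ratio above was never reached
spark.scheduler.maxRegisteredResourcesWaitingTime  120s
```

This only limits how long a single driver sits idle; it does not by itself return the held resources to the Mesos master.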
We use DRF for Mesos allocation and FIFO scheduling within Spark. The cluster is deliberately under-provisioned, since not all jobs need to run at the same time; at any point there can be 10-15 Spark frameworks running on it.
Is there an easy way to get better detectability for the scenario where individual Spark frameworks hold onto resources but never launch any tasks, or to have these frameworks release their resources after a fixed amount of time?
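For the detection side, one approach we have been considering (a sketch, not something we run in production) is to poll the Mesos master's `/state` HTTP endpoint, whose `frameworks` entries carry `used_resources` and a `tasks` list, and alert on active frameworks that hold CPUs but run zero tasks. The helper below assumes that response shape and uses a hand-made snippet in place of a live fetch:

```python
def find_starved_frameworks(state):
    """Return names of active frameworks holding CPUs but running zero tasks.

    `state` is the parsed JSON body of the Mesos master's /state endpoint
    (GET http://<master>:5050/state); field names follow that response shape.
    """
    starved = []
    for fw in state.get("frameworks", []):
        if not fw.get("active", False):
            continue  # inactive frameworks are not holding offers
        holds_cpus = fw.get("used_resources", {}).get("cpus", 0) > 0
        if holds_cpus and len(fw.get("tasks", [])) == 0:
            starved.append(fw["name"])
    return starved

# Hand-made sample in the shape of a /state response; in practice you would
# fetch the endpoint on a schedule (e.g. with urllib) and alert when the
# returned list stays non-empty for longer than some grace period.
sample = {"frameworks": [
    {"name": "job-a", "active": True,
     "used_resources": {"cpus": 8, "mem": 4096}, "tasks": []},
    {"name": "job-b", "active": True,
     "used_resources": {"cpus": 4, "mem": 2048},
     "tasks": [{"id": "t1", "state": "TASK_RUNNING"}]},
]}
print(find_starved_frameworks(sample))  # prints ['job-a']
```

The grace period matters because a freshly registered framework legitimately holds executors for a short time before its first tasks launch.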