[Spark Structured Streaming] Running out of disk quota due to /work/tmp


subramgr
We have a Spark Structured Streaming job that runs out of disk quota after a
few days.

The primary cause is a large number of empty folders that accumulate in the
/work/tmp directory.

Any idea how to prune them?
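One workaround (a sketch, not an official Spark mechanism) is a periodic cleanup job that deletes empty directories older than a day under the scratch path. The `/work/tmp` path and the one-day retention below are assumptions taken from this post; adjust both to match your job's checkpointing and retention needs:

```shell
#!/bin/sh
# Hypothetical cleanup sketch: prune empty directories left behind under a
# scratch path. TMP_DIR defaults to the /work/tmp path mentioned above.
TMP_DIR="${TMP_DIR:-/work/tmp}"

if [ -d "$TMP_DIR" ]; then
  # -mindepth 1 keeps the root itself; -empty matches only directories with
  # no contents; -mtime +1 restricts deletion to entries older than one day
  # so directories still in use by a running batch are left alone.
  find "$TMP_DIR" -mindepth 1 -type d -empty -mtime +1 -delete
fi
```

Run from cron (e.g. hourly) this keeps the quota from filling while non-empty and recently modified directories are untouched.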



--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
