Spark 2.3.1 leaves _temporary directory on S3 even after the write to S3 is done.
We recently upgraded to Spark 2.3.1 and started seeing that our jobs leave a _temporary directory behind in S3 even after the write to S3 has finished. Spark does not clean up the temporary directory.
We are on Hadoop 2.8. Is there a way to control this?
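For reference, the write is roughly like the following (the path and DataFrame name are illustrative, not our actual job):

```scala
// Minimal sketch of the kind of write that leaves _temporary behind.
// "df" is an existing DataFrame; the bucket/prefix below is a placeholder.
df.write
  .mode("overwrite")
  .parquet("s3a://my-bucket/output/table")
// After the job completes, s3a://my-bucket/output/table/_temporary/ remains.
```

We see the leftover directory regardless of output format, so it looks like a commit/cleanup issue rather than something format-specific.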