Folks,
We recently upgraded to Spark 2.3.1 and started seeing that Spark jobs leave a _temporary directory in S3 even after the write to S3 has finished; the temporary directory is never cleaned up. We are on Hadoop 2.8. Is there a way to control this?
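
For reference, this is roughly the kind of write we do (a minimal sketch; the app name, bucket, path, and DataFrame are placeholders, not our actual job):

    import org.apache.spark.sql.SparkSession

    object S3WriteExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("s3-write-example")
          .getOrCreate()

        // Placeholder data standing in for our real DataFrame.
        val df = spark.range(1000).toDF("id")

        // Plain DataFrame write to S3 through the s3a connector.
        // After the job completes we still see
        // s3a://some-bucket/output/_temporary/ left behind.
        df.write
          .mode("overwrite")
          .parquet("s3a://some-bucket/output/")

        spark.stop()
      }
    }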