[SparkSQL] too many open files although ulimit set to 1048576

elementpps
SparkSQL

data: a table of 150,000,000 rows inner joined with a table of 100,000,000 rows

Spark version: 2.0.2

Although the open-files limit (`ulimit -n`) is already set to 1048576 (setting it any higher causes an error), the job still throws this error:

java.io.FileNotFoundException: /tmp/spark-c32ce764-44ee-457c-96ed-44c77f829845/executor-8ba31059-3d65-47b3-97d5-73da78bb9e16/blockmgr-da6afd20-0635-42e6-af6c-f70a1dec738f/01/temp_shuffle_86d4051d-9dfa-4517-bc95-6c323ae699fa (too many open files)
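
One thing worth checking: the limit shown by `ulimit` in a login shell does not necessarily apply to executor JVMs started by a cluster manager or init system. As a quick sketch, the effective limit of a running executor can be read from /proc (`<executor-pid>` below is a placeholder for a real executor process id, e.g. found with jps or ps):

    # Inspect the open-files limit the executor process actually runs with
    grep 'open files' /proc/<executor-pid>/limits

If this shows a much smaller value than 1048576, the shell setting is not reaching the executors.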

Re: [SparkSQL] too many open files although ulimit set to 1048576

darin
I don't think your setting is taking effect.
Try adding `ulimit -n 10240` to spark-env.sh.
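
For example, something like this in conf/spark-env.sh on each worker node (a sketch, assuming a standalone deployment where the daemons are launched through the scripts that source this file; the value 10240 is just the one suggested above):

    # conf/spark-env.sh
    # Raise the per-process open-files limit before the worker and
    # executor JVMs are launched, so it applies to them rather than
    # only to an interactive login shell.
    ulimit -n 10240

After adding it, restart the workers so the new limit is picked up.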