Spark SQL reduce number of java threads

Spark SQL reduce number of java threads

Wanda Hawk-2
Hello

I am trying to reduce the number of Java threads (about 80 on my system) to as few as possible.
What settings can I change in spark-1.1.0/conf/spark-env.sh (or elsewhere)?
I am also using Hadoop to store data on HDFS.

Thank you,
Wanda

Re: Spark SQL reduce number of java threads

Prashant Sharma
What is the motivation behind this?

You can start with the master set to local[NO_OF_THREADS]. Reducing the number of threads anywhere else can have unexpected results. Take a look at http://spark.apache.org/docs/latest/configuration.html.
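
For reference, a minimal sketch of what "master as local[NO_OF_THREADS]" looks like in a Spark 1.1 application (the object and app names below are illustrative, not from this thread). local[1] limits task execution to a single thread, although Spark's internal daemon threads will still exist:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object SingleThreadTrace {
  def main(args: Array[String]): Unit = {
    // Run the driver and executor in one JVM with a single task-execution thread.
    // Internal daemon threads (block manager, Akka, GC, ...) are still created;
    // local[1] only bounds the number of task threads.
    val conf = new SparkConf()
      .setAppName("single-thread-trace")
      .setMaster("local[1]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // ... run the Spark SQL workload to be traced here ...

    sc.stop()
  }
}

The same effect can be had without code changes by passing --master local[1] to spark-submit or spark-shell.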

Prashant Sharma



On Tue, Oct 28, 2014 at 2:08 PM, Wanda Hawk <[hidden email]> wrote:
Hello

I am trying to reduce the number of Java threads (about 80 on my system) to as few as possible.
What settings can I change in spark-1.1.0/conf/spark-env.sh (or elsewhere)?
I am also using Hadoop to store data on HDFS.

Thank you,
Wanda

Re: Spark SQL reduce number of java threads

Wanda Hawk-2
I am trying to capture a software trace, and I need to get the number of active threads as low as I can so that I can inspect the "active" part of the workload.
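
As a rough illustration (not something from this thread), the live threads inside the driver JVM can be listed with the standard JDK API, which helps separate the always-present daemon threads from the ones doing actual work:

import scala.collection.JavaConverters._

object ThreadSnapshot {
  def main(args: Array[String]): Unit = {
    // Dump the name, state and daemon flag of every live JVM thread.
    val threads = Thread.getAllStackTraces.keySet.asScala.toSeq.sortBy(_.getName)
    threads.foreach { t =>
      println(f"${t.getName}%-40s state=${t.getState} daemon=${t.isDaemon}")
    }
    println(s"total live threads: ${threads.size}")
  }
}

Attaching jstack to the running JVM gives the same information without modifying the application.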


From: Prashant Sharma <[hidden email]>
To: Wanda Hawk <[hidden email]>
Cc: "[hidden email]" <[hidden email]>
Sent: Tuesday, October 28, 2014 11:17 AM
Subject: Re: Spark SQL reduce number of java threads

What is the motivation behind this?

You can start with the master set to local[NO_OF_THREADS]. Reducing the number of threads anywhere else can have unexpected results. Take a look at http://spark.apache.org/docs/latest/configuration.html.

Prashant Sharma

On Tue, Oct 28, 2014 at 2:08 PM, Wanda Hawk <[hidden email]> wrote:
Hello

I am trying to reduce the number of Java threads (about 80 on my system) to as few as possible.
What settings can I change in spark-1.1.0/conf/spark-env.sh (or elsewhere)?
I am also using Hadoop to store data on HDFS.

Thank you,
Wanda