SPARK_JAVA_OPTS not picked up by the application

Linlin

Hi,

I have a Java option (-Xss) specified in SPARK_JAVA_OPTS in spark-env.sh. After stopping and restarting the Spark cluster, I noticed that the setting is applied to the master/worker daemons, but it is not propagated to the executors, so my application continues to behave the same. Is there a way to specify it through SparkConf, e.g. SparkConf.set()? And what is the correct way to set this up for a particular Spark application?

Thank you!
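
For reference, a minimal sketch of the kind of spark-env.sh entry described above (the -Xss4m value is illustrative):

    # conf/spark-env.sh
    export SPARK_JAVA_OPTS="-Xss4m"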

Re: SPARK_JAVA_OPTS not picked up by the application

hequn8128
Have you sent spark-env.sh to the slave nodes?

Re: SPARK_JAVA_OPTS not picked up by the application

Chen Jingci
In reply to this post by Linlin
The properties in spark-env.sh are machine-specific, so they need to be set on your workers as well. I guess what you are asking about is System.setProperty(); you can call it before you initialize your SparkContext.

Best Regards,
Chen Jingci
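
A minimal sketch of the pattern Chen describes, for a Spark 0.9-era application (the master URL, app name, and property name are illustrative):

    import org.apache.spark.SparkContext

    object StackSizeApp {
      def main(args: Array[String]): Unit = {
        // spark.* properties must be set before the SparkContext is created;
        // properties set afterwards are not picked up by that context.
        System.setProperty("spark.executor.memory", "1g")  // illustrative property
        val sc = new SparkContext("spark://master:7077", "StackSizeApp")
        // ... job logic ...
        sc.stop()
      }
    }

Note that this only covers spark.* configuration properties; a JVM launch option such as -Xss cannot be changed from inside an already-running JVM, as later replies in this thread point out.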

Re: SPARK_JAVA_OPTS not picked up by the application

Linlin
In reply to this post by hequn8128
My cluster only has one node (master/worker).

Re: SPARK_JAVA_OPTS not picked up by the application

Aaron Davidson
It's interesting that the setting was applied to the master/worker processes, as those have used a different environment variable, SPARK_DAEMON_JAVA_OPTS, since around Spark 0.8.0. Is it being set in the driver?
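
For context, a sketch of how the two variables are commonly split in conf/spark-env.sh (the -Xss values are illustrative):

    # conf/spark-env.sh
    # JVM options for the standalone master/worker daemons (since ~0.8.0)
    export SPARK_DAEMON_JAVA_OPTS="-Xss4m"
    # JVM options picked up by driver/executor JVMs launched through the Spark scripts
    export SPARK_JAVA_OPTS="-Xss4m"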

Re: SPARK_JAVA_OPTS not picked up by the application

Linlin
In reply to this post by Chen Jingci
Thanks!

Since my worker is on the same node (-Xss is the JVM option for the maximum thread stack size), my worker does show this option now. But I realized I accidentally ran the app in local mode, because I didn't give a master URL when initializing the SparkContext. For local mode, how do I pass a JVM option to the app?


hadoop   17315     1  0 14:56 ?        00:02:12 /home/hadoop/ibm-java-x86_64-60/bin/java -cp :/home/hadoop/spark-0.9.0-incubating/conf:/home/hadoop/spark-0.9.0-incubating/assembly/target/scala-2.10/spark-assembly-0.9.0-incubating-hadoop1.2.1.jar -Dspark.akka.logLifecycleEvents=true -Djava.library.path= -Xms512m -Xmx512m -Xss1024k org.apache.spark.deploy.worker.Worker spark://hdtest021.svl.ibm.com:7077


Re: SPARK_JAVA_OPTS not picked up by the application

Linlin
In reply to this post by Aaron Davidson

Thanks!

So SPARK_DAEMON_JAVA_OPTS is for the worker, and SPARK_JAVA_OPTS is for the master? I only set SPARK_JAVA_OPTS in spark-env.sh, and the JVM option was applied to both the master and worker daemons.

Re: SPARK_JAVA_OPTS not picked up by the application

Chen Jingci
In reply to this post by Linlin
I haven't tried it, but I think you can still use System.setProperty() to set the property. Or, if you run the application with sbt, you can also set javaOptions in sbt.
Is that working for you?

Thanks

Best Regards,
Chen Jingci
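
A minimal sketch of the sbt approach (sbt only applies javaOptions when it forks a separate JVM; the -Xss4m value is illustrative):

    // build.sbt
    fork in run := true            // javaOptions only take effect in a forked JVM
    javaOptions in run += "-Xss4m" // maximum thread stack size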

Re: SPARK_JAVA_OPTS not picked up by the application

Linlin
Thank you!

I have set the JVM option in sbt for local mode, and that works!

I'm not sure how to specify it through System.setProperty(); is this a JVM command-line option only?

Thank you for your help!

Re: SPARK_JAVA_OPTS not picked up by the application

Chen Jingci
Some properties can be set with System.setProperty(). For example, -Dfile.encoding can be set as System.setProperty("file.encoding", "utf-8"). But some cannot, like the heap size, because by the time the application is running it is too late to change it. And in the new 0.9 version, I think you can use SparkConf.set() instead of System.setProperty(); it seems to do the same thing.

Thanks.

Best Regards,
Chen Jingci
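
A minimal sketch of the SparkConf style introduced in 0.9 (the master, app name, and property name are illustrative):

    import org.apache.spark.{SparkConf, SparkContext}

    object ConfExample {
      def main(args: Array[String]): Unit = {
        // SparkConf (new in 0.9) replaces System.setProperty for spark.* settings;
        // JVM launch options such as -Xss or -Xmx still cannot be set this way.
        val conf = new SparkConf()
          .setMaster("local")
          .setAppName("ConfExample")
          .set("spark.executor.memory", "1g")  // illustrative property
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }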