Spark job blocked and runs indefinitely


Spark job blocked and runs indefinitely

amine_901
We are encountering a problem with a Spark 1.6 job (on YARN) that never ends when
several jobs are launched simultaneously.
We found that launching the Spark job in yarn-client mode avoids the problem,
whereas launching it in yarn-cluster mode does not.
This could be a clue to finding the cause.

We changed the code to add a sparkContext.stop().
Indeed, the SparkContext was created (val sparkContext = createSparkContext)
but never stopped. This fix reduced the number of jobs that remain blocked,
but we still have some blocked jobs.
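
For illustration, a minimal sketch of that fix, assuming the driver code looks roughly like this (createSparkContext stands in for the poster's own helper, and the try/finally placement is an assumption about how the stop call was added):

import org.apache.spark.{SparkConf, SparkContext}

object MyJob {
  // Hypothetical helper standing in for the poster's createSparkContext.
  def createSparkContext(): SparkContext =
    new SparkContext(new SparkConf().setAppName("my-job"))

  def main(args: Array[String]): Unit = {
    val sparkContext = createSparkContext()
    try {
      sparkContext.parallelize(1 to 10).count() // placeholder job logic
    } finally {
      // Stop the context even if the job fails, so the YARN
      // ApplicationMaster can unregister and the application can finish.
      sparkContext.stop()
    }
  }
}

Without the stop() call, the application can keep running after the user code finishes, which would match the repeating "Sending progress" heartbeats in the log below.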

By analyzing the logs, we found these lines repeating endlessly:

17/09/29 11:04:37 DEBUG SparkEventPublisher: Enqueue SparkListenerExecutorMetricsUpdate(1,WrappedArray())
17/09/29 11:04:41 DEBUG ApplicationMaster: Sending progress
17/09/29 11:04:41 DEBUG ApplicationMaster: Number of pending allocations is 0. Sleeping for 5000.

Does anyone have an idea about this issue?
Thank you in advance.





Re: Spark job blocked and runs indefinitely

sebastian.piu

We have this issue randomly too, so I'm interested in hearing if anyone was able to get to the bottom of it.




Re: Spark job blocked and runs indefinitely

amine_901
It seems that the job blocks when we call newAPIHadoopRDD to get data from HBase; that may be the issue (a sketch of the call is below).
Is there another API to load data from HBase?
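
For context, a minimal sketch of the newAPIHadoopRDD call in question, reading an HBase table through the standard TableInputFormat (the table name is a placeholder):

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object HBaseScanJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-scan"))
    val conf = HBaseConfiguration.create()
    conf.set(TableInputFormat.INPUT_TABLE, "my_table") // placeholder table name

    // Scan the table as an RDD of (row key, Result) pairs.
    val rdd = sc.newAPIHadoopRDD(
      conf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])

    println(rdd.count())
    sc.stop()
  }
}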



--
CHERIFI Mohamed Amine
Big Data Developer / Data Scientist
07 81 65 17 03

Re: Spark job blocked and runs indefinitely

Timur Shenkao
HBase has its own Java API and Scala API; you can use whichever you like.
By the way, which Spark-HBase connector do you use: Cloudera's or Hortonworks'?
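
For illustration, a minimal sketch of reading rows through the plain HBase client API instead of an RDD, assuming an HBase 1.x client on the classpath (the table, column family, and qualifier names are placeholders):

import scala.collection.JavaConverters._
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Scan}
import org.apache.hadoop.hbase.util.Bytes

object HBaseClientScan {
  def main(args: Array[String]): Unit = {
    // Connects using the hbase-site.xml found on the classpath.
    val connection = ConnectionFactory.createConnection(HBaseConfiguration.create())
    try {
      val table = connection.getTable(TableName.valueOf("my_table"))
      val scanner = table.getScanner(new Scan())
      try {
        for (result <- scanner.asScala) {
          val value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col"))
          println(s"${Bytes.toString(result.getRow)} -> ${Bytes.toString(value)}")
        }
      } finally {
        scanner.close()
        table.close()
      }
    } finally {
      connection.close()
    }
  }
}

This runs entirely in the driver, so it sidesteps newAPIHadoopRDD but also loses Spark's partitioned scan; it is a diagnostic alternative rather than a drop-in replacement.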
