Yarn number of containers

Yarn number of containers

jamborta
Hi all,

I am running Spark with the default settings in YARN client mode. For some reason YARN always allocates three containers to the application (where is that set?), and only uses two of them.

Also, the CPUs on the cluster never go over 50%. I turned off the fair scheduler, set spark.cores.max high, and allocated plenty of memory per worker.

Are there any additional settings I am missing?

thanks,

Re: Yarn number of containers

Marcelo Vanzin
On Thu, Sep 25, 2014 at 8:55 AM, jamborta <[hidden email]> wrote:
> I am running spark with the default settings in yarn client mode. For some
> reason yarn always allocates three containers to the application (wondering
> where it is set?), and only uses two of them.

The default number of executors in YARN mode is 2, so you get 2
executors plus the application master: 3 containers.

> Also the cpus on the cluster never go over 50%, I turned off the fair
> scheduler and set high spark.cores.max. Is there some additional settings I
> am missing?

You probably need to request more cores (--executor-cores). I don't
remember if that is respected in YARN mode, but it should be.
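[Editor's note: a minimal sketch of passing those flags on the command line. The application path, resource values, and master string are placeholders; tune them to your cluster. `yarn-client` was the client-mode master string in Spark versions of this era.]

```shell
# Request 4 executors with 4 cores and 4g of memory each (illustrative
# values). YARN then launches 5 containers: 4 executors + 1 application
# master.
spark-submit \
  --master yarn-client \
  --num-executors 4 \
  --executor-cores 4 \
  --executor-memory 4g \
  path/to/your_app.py
```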

--
Marcelo

---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: Yarn number of containers

jamborta
Thank you.

Where is the number of containers set?

On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <[hidden email]> wrote:

> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <[hidden email]> wrote:
>> I am running spark with the default settings in yarn client mode. For some
>> reason yarn always allocates three containers to the application (wondering
>> where it is set?), and only uses two of them.
>
> The default number of executors in Yarn mode is 2; so you have 2
> executors + the application master, so 3 containers.
>
>> Also the cpus on the cluster never go over 50%, I turned off the fair
>> scheduler and set high spark.cores.max. Is there some additional settings I
>> am missing?
>
> You probably need to request more cores (--executor-cores). Don't
> remember if that is respected in Yarn, but should be.
>
> --
> Marcelo



Re: Yarn number of containers

Marcelo Vanzin
From spark-submit --help:

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be
extracted into the
                              working directory of each executor.
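[Editor's note: the same settings can go in conf/spark-defaults.conf instead of the command line; these are the property names that back the flags above (values illustrative, assuming a Spark version that supports spark.executor.instances on YARN).]

```
# conf/spark-defaults.conf -- property equivalents of the spark-submit flags
spark.executor.instances   4
spark.executor.cores       4
spark.executor.memory      4g
```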

On Thu, Sep 25, 2014 at 2:20 PM, Tamas Jambor <[hidden email]> wrote:

> Thank you.
>
> Where is the number of containers set?
>
> On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <[hidden email]> wrote:
>> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <[hidden email]> wrote:
>>> I am running spark with the default settings in yarn client mode. For some
>>> reason yarn always allocates three containers to the application (wondering
>>> where it is set?), and only uses two of them.
>>
>> The default number of executors in Yarn mode is 2; so you have 2
>> executors + the application master, so 3 containers.
>>
>>> Also the cpus on the cluster never go over 50%, I turned off the fair
>>> scheduler and set high spark.cores.max. Is there some additional settings I
>>> am missing?
>>
>> You probably need to request more cores (--executor-cores). Don't
>> remember if that is respected in Yarn, but should be.
>>
>> --
>> Marcelo



--
Marcelo



Re: Yarn number of containers

jamborta
Thanks.


On Thu, Sep 25, 2014 at 10:25 PM, Marcelo Vanzin [via Apache Spark
User List] <[hidden email]> wrote:

> From spark-submit --help:
>
>  YARN-only:
>   --executor-cores NUM        Number of cores per executor (Default: 1).
>   --queue QUEUE_NAME          The YARN queue to submit to (Default:
> "default").
>   --num-executors NUM         Number of executors to launch (Default: 2).
>   --archives ARCHIVES         Comma separated list of archives to be
> extracted into the
>                               working directory of each executor.
>
> On Thu, Sep 25, 2014 at 2:20 PM, Tamas Jambor <[hidden email]> wrote:
>
>> Thank you.
>>
>> Where is the number of containers set?
>>
>> On Thu, Sep 25, 2014 at 7:17 PM, Marcelo Vanzin <[hidden email]> wrote:
>>> On Thu, Sep 25, 2014 at 8:55 AM, jamborta <[hidden email]> wrote:
>>>> I am running spark with the default settings in yarn client mode. For
>>>> some
>>>> reason yarn always allocates three containers to the application
>>>> (wondering
>>>> where it is set?), and only uses two of them.
>>>
>>> The default number of executors in Yarn mode is 2; so you have 2
>>> executors + the application master, so 3 containers.
>>>
>>>> Also the cpus on the cluster never go over 50%, I turned off the fair
>>>> scheduler and set high spark.cores.max. Is there some additional
>>>> settings I
>>>> am missing?
>>>
>>> You probably need to request more cores (--executor-cores). Don't
>>> remember if that is respected in Yarn, but should be.
>>>
>>> --
>>> Marcelo
>
>
>
> --
> Marcelo