Out of memory issue


Amit Sharma-2
Hi, I am using a 16-node Spark cluster with the following config:
1. Executor memory: 8 GB
2. Cores per executor: 5
3. Driver memory: 12 GB


We have a streaming job. Usually we do not see a problem, but sometimes we get a heap-memory exception on an executor. I do not understand it: the data size is the same, and the job receives and processes requests normally, but suddenly it starts giving out-of-memory errors. It throws the exception for one executor, then for other executors as well, and then it stops processing requests.
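
For reference, the configuration above is set roughly like this when the streaming context is created (a minimal sketch; the app name, master URL, and batch interval are placeholders, and driver memory normally has to be given at submit time rather than in code):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Sketch of the settings listed above; placeholders are marked.
    val conf = new SparkConf()
      .setAppName("streaming-job")                 // placeholder app name
      .setMaster("spark://master:7077")            // placeholder master URL
      .set("spark.executor.memory", "8g")
      .set("spark.executor.cores", "5")
      .set("spark.driver.memory", "12g")           // in client mode this must be set at launch instead
    val ssc = new StreamingContext(conf, Seconds(10))  // placeholder batch interval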

Thanks
Amit

Re: Out of memory issue

Amit Sharma-2
Can you please help?


Thanks
Amit


Re: Out of memory issue

Amit Sharma-2
In reply to this post by Amit Sharma-2
Please find below the exact exception:

Exception in thread "streaming-job-executor-3" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Arrays.java:3332)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
        at java.lang.StringBuilder.append(StringBuilder.java:136)
        at scala.StringContext.standardInterpolator(StringContext.scala:126)
        at scala.StringContext.s(StringContext.scala:95)
        at sparkStreaming.TRReview.getTRReviews(TRReview.scala:307)
        at sparkStreaming.KafkaListener$$anonfun$1$$anonfun$apply$1$$anonfun$3.apply(KafkaListener.scala:154)
        at sparkStreaming.KafkaListener$$anonfun$1$$anonfun$apply$1$$anonfun$3.apply(KafkaListener.scala:138)
        at scala.util.Success$$anonfun$map$1.apply(Try.scala:237)
        at scala.util.Try$.apply(Try.scala:192)
        at scala.util.Success.map(Try.scala:237)


Re: Out of memory issue

Amit Sharma-2
Please help.


Thanks
Amit


Re: Out of memory issue

Russell Spitzer
In reply to this post by Amit Sharma-2
Well, if the system doesn't change, then the data must be different. The exact exception probably won't be helpful, since it only tells us the last allocation that failed. My guess is that your ingestion changed: either there is now slightly more data than before, or it is skewed differently. One of those two things is probably happening and is overloading one executor.

The solution is to increase the executor heap.
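
For example (a minimal sketch; 12g is illustrative, not a tuned recommendation for your workload):

    import org.apache.spark.SparkConf

    // Raising the executor heap from the 8g in the original config.
    // Driver memory usually has to be set at submit time
    // (e.g. spark-submit --driver-memory) rather than in SparkConf.
    val conf = new SparkConf()
      .setAppName("streaming-job")          // placeholder app name
      .set("spark.executor.memory", "12g")  // was 8g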
