[Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen


Jianneng Li
Hello everyone,

WholeStageCodegen generates code that appends results into a BufferedRowIterator, which keeps the results in an in-memory linked list. Long story short, this is a problem when multiple joins that can blow up row counts (e.g. BroadcastHashJoin) get planned into the same WholeStageCodegen: results keep accumulating in the linked list and are not consumed fast enough, eventually causing the JVM to run out of memory.
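
To make the shape of the problem concrete, here is a minimal sketch (table and column names are made up) of a query where several BroadcastHashJoins get fused into one WholeStageCodegen stage; explain() shows them under a single codegen node, marked with an asterisk:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    val spark = SparkSession.builder().appName("wscg-repro").getOrCreate()

    // Hypothetical inputs: one large table and two small dimension tables.
    val fact = spark.range(0L, 100000000L).toDF("id")
    val dim1 = spark.range(0L, 1000L).toDF("id")
    val dim2 = spark.range(0L, 1000L).toDF("id")

    // In a plan like this, both broadcast hash joins are fused into the same
    // WholeStageCodegen stage. When join keys are not unique, each join can
    // multiply the row count, and the generated code appends all of those
    // intermediate rows into BufferedRowIterator's linked list before the
    // downstream operator consumes them.
    val joined = fact
      .join(broadcast(dim1), Seq("id"))
      .join(broadcast(dim2), Seq("id"))

    // The physical plan shows one codegen stage covering both joins.
    joined.explain()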

Does anyone else have experience with this problem? Some obvious solutions include making BufferedRowIterator spill the linked list, or making it bounded, but I'd imagine this would have been done a long time ago if it were necessary.
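
For now, the only workarounds I know of operate at the plan level rather than fixing BufferedRowIterator itself. A sketch of session-level settings (these apply to the whole session, so you trade away the codegen speedup wherever they kick in; measure the impact on your workload):

    // Disable whole-stage codegen entirely: operators then run as
    // row-at-a-time iterators and nothing accumulates in BufferedRowIterator.
    spark.conf.set("spark.sql.codegen.wholeStage", "false")

    // Less drastic: lower the fallback thresholds so that very wide or very
    // large fused stages fall back to non-codegen execution.
    spark.conf.set("spark.sql.codegen.maxFields", "50")          // default 100
    spark.conf.set("spark.sql.codegen.hugeMethodLimit", "8000")  // default 65535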

Thanks,

Jianneng

Re: [Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen

Liu Genie
I have encountered the too-many-joins problem before. Since the joined dataframes were small enough, I converted the joins to UDF operations, which was much faster and did not cause out-of-memory problems.


Re: [Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen

Jianneng Li
Thanks, Genie. Unfortunately, the joins I'm doing in this case are large, so a UDF likely won't work.

Jianneng


Re: [Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen

Yeikel
Can you please explain what you mean by that? How do you use a UDF to replace a join? Thanks


Re: [Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen

Jianneng Li
I could be wrong, but I'm guessing that the UDF acts as the build side of a hash join: the hash table lives inside the UDF, and the UDF is called to perform the probe. There are limitations to this approach, of course; you can't do all joins this way.
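
If that guess is right, the replacement would look roughly like the sketch below (smallDf and bigDf are hypothetical placeholders with long keys and string values): collect the small side into a Map, broadcast it, and probe it from a UDF instead of planning a BroadcastHashJoin.

    import org.apache.spark.sql.functions.{col, udf}

    // Build side: collect the small dataframe's (key, value) pairs into a
    // hash map on the driver and broadcast it to the executors.
    val buildMap: Map[Long, String] =
      smallDf.collect().map(r => r.getLong(0) -> r.getString(1)).toMap
    val buildBc = spark.sparkContext.broadcast(buildMap)

    // Probe side: the UDF performs the hash lookup per row; returning an
    // Option gives left-outer semantics (None becomes NULL in the result).
    val lookup = udf((key: Long) => buildBc.value.get(key))

    val result = bigDf.withColumn("dim_value", lookup(col("key")))

Since each lookup is just a projection, no join operator (and none of the extra codegen buffering) appears in the plan.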

Best,

Jianneng



Re: [Spark SQL] Memory problems with packing too many joins into the same WholeStageCodegen

Liu Genie
Exactly. My problem was a big dataframe joining a lot of small dataframes. I convert the small dataframes to Maps and then apply a UDF to the big dataframe. (Broadcast joins didn't work with that many joins.)
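
Roughly like the following sketch, with hypothetical names (dimA, dimB, and bigDf, all keyed by a long "key" column): one broadcast Map and one withColumn per small dataframe, so no join operators show up in the plan at all.

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.functions.{col, udf}

    // Hypothetical small dimension tables, each with (key, value) columns.
    val smallDfs: Seq[(String, DataFrame)] =
      Seq("dim_a" -> dimA, "dim_b" -> dimB)

    // One broadcast map and one lookup UDF per small table, applied as a
    // plain projection on the big dataframe instead of as a join.
    val enriched = smallDfs.foldLeft(bigDf) { case (df, (name, small)) =>
      val m = spark.sparkContext.broadcast(
        small.collect().map(r => r.getLong(0) -> r.getString(1)).toMap)
      val lookup = udf((k: Long) => m.value.get(k))
      df.withColumn(name, lookup(col("key")))
    }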
