Spark Adaptive configuration



Tzahi File
Hi, 

I saw that Spark has an option to adapt the join and shuffle configuration, for example "spark.sql.adaptive.shuffle.targetPostShuffleInputSize".

I wanted to know whether you have any experience with such a configuration, and how it changed performance.
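
For reference, this is roughly how I imagine enabling it (just a sketch; these are the Spark 2.x-era property names, and the 64m target size is only a placeholder value):

import org.apache.spark.sql.SparkSession

// Sketch: enable adaptive execution and set the post-shuffle target size
// when building the session (Spark 2.x property names assumed).
val spark = SparkSession.builder()
  .appName("adaptive-shuffle-example")
  .config("spark.sql.adaptive.enabled", "true")
  .config("spark.sql.adaptive.shuffle.targetPostShuffleInputSize", "64m")
  .getOrCreate()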

Another question: during Spark SQL query execution, is there an option to dynamically change the shuffle partition configuration?
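
To make the question concrete, today I change it by hand between queries, something like the sketch below (the "events" table and the output path are made up for illustration), and I'd like to know whether the partition count can adapt automatically within a single query instead:

// Changing the shuffle partition count at runtime: the new value only
// applies to queries planned after the call, not to one already running.
spark.conf.set("spark.sql.shuffle.partitions", "400")
spark.sql("SELECT key, count(*) AS cnt FROM events GROUP BY key")  // "events" is a hypothetical registered table
  .write.mode("overwrite").parquet("/tmp/wide_counts")             // illustrative output path

spark.conf.set("spark.sql.shuffle.partitions", "50")
// subsequent queries use the smaller setting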

Thanks,
Tzahi