Rack Awareness in Spark


Hello everyone,
Is there a way to specify rack awareness in Spark? For example, if I want
to use aggregateByKey, is there a way to have Spark aggregate within the same
rack first, and then aggregate across racks? I'm interested in this because I am
trying to figure out whether there is a way to deal with slow inter-rack links.
I have searched through the mailing list and StackOverflow, but all of the
results are about rack awareness in HDFS instead of Spark.
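To make the idea concrete, here is a rough sketch of the two-level aggregation I have in mind, in plain Python over in-memory collections (not PySpark; the `RACK_OF` mapping and the sample records are made-up assumptions, since Spark itself does not expose rack topology for shuffles):

```python
from collections import defaultdict

# Hypothetical host-to-rack mapping -- an assumption for illustration only.
RACK_OF = {"host1": "rackA", "host2": "rackA", "host3": "rackB"}

# Sample records as (host, key, value) triples.
records = [("host1", "k", 1), ("host2", "k", 2), ("host3", "k", 3)]

def two_level_aggregate(records):
    # Stage 1: pre-aggregate within each rack, keyed by (rack, key),
    # so only one partial value per (rack, key) would cross rack boundaries.
    per_rack = defaultdict(int)
    for host, key, value in records:
        per_rack[(RACK_OF[host], key)] += value

    # Stage 2: combine the per-rack partial sums across racks, keyed by key.
    combined = defaultdict(int)
    for (_rack, key), partial in per_rack.items():
        combined[key] += partial
    return dict(combined)
```

The point of the pattern is that stage 1 shrinks the data to one partial aggregate per (rack, key) before anything has to travel between racks, which is what I was hoping Spark could do natively.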
Thanks a lot!


Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
