Is there a way to specify rack awareness in Spark? For example, if I want
to use aggregateByKey, is there a way to let Spark aggregate within the same
rack first, then aggregate between racks? I'm interested in this because I am
trying to figure out whether there is a way to deal with limited inter-rack bandwidth.
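To make the two-level aggregation I have in mind concrete, here is a plain-Python sketch (not actual Spark code; the RACK_OF host-to-rack mapping is hypothetical). Phase 1 would sum values per (rack, key) so only local traffic is involved, and phase 2 would merge the per-rack partials across racks:

```python
from collections import defaultdict

# Hypothetical host -> rack mapping, for illustration only.
RACK_OF = {"host1": "rackA", "host2": "rackA", "host3": "rackB"}

def aggregate_two_level(records):
    """records: iterable of (host, key, value) tuples.

    Phase 1: sum values per (rack, key) -- within-rack aggregation.
    Phase 2: sum the per-rack partials per key -- only these small
    partials would need to cross rack boundaries.
    """
    per_rack = defaultdict(int)
    for host, key, value in records:
        per_rack[(RACK_OF[host], key)] += value

    combined = defaultdict(int)
    for (_rack, key), partial in per_rack.items():
        combined[key] += partial
    return dict(combined)

data = [("host1", "k", 1), ("host2", "k", 2), ("host3", "k", 4)]
print(aggregate_two_level(data))  # {'k': 7}
```

This is essentially what a combiner-style pre-aggregation does per partition, except keyed by rack rather than by partition.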
I have searched the mailing list and Stack Overflow, but everything I found
is about rack awareness in HDFS rather than Spark.
Thanks a lot!