I have a scenario where Kafka is going to be the input source for data. How can I deploy my application, which contains all the logic for transforming the Kafka input stream?
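For context, the application is roughly like this minimal sketch (assuming Spark Structured Streaming with the spark-sql-kafka connector; the broker address, topic name, and class name are placeholders I made up for illustration):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaTransformJob {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .appName("kafka-transform")
        .getOrCreate();

    // Read from Kafka (broker address and topic are placeholders)
    Dataset<Row> input = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "input-topic")
        .load();

    // Stand-in for my real transformation logic:
    // just extract the message value as a string
    Dataset<Row> transformed = input.selectExpr("CAST(value AS STRING)");

    // Write results out (console sink here only for illustration)
    StreamingQuery query = transformed.writeStream()
        .format("console")
        .start();

    // Blocks forever -- this is the long-running driver process I ask about below
    query.awaitTermination();
  }
}
```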
But I am a little bit confused about the usage of Spark in cluster mode. After starting Spark in cluster mode, I want to deploy my application on the cluster, so why do I need to keep one more Java application (the driver) running forever? Is it possible to deploy my application JAR on the cluster and run only the master/slave processes? I am not sure if I am making sense.
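In other words, is something like spark-submit with --deploy-mode cluster the intended approach here? My understanding is that it launches the driver on one of the workers inside the cluster, so no extra long-running JVM is needed on the machine that submits the job. The class name, master URL, and JAR path below are placeholders:

```
# With --deploy-mode cluster the driver itself should run on a worker,
# not on the submitting machine (if I understand the docs correctly).
spark-submit \
  --class com.example.KafkaTransformJob \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  /path/to/my-app.jar
```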