Using Spark as a fail-over platform for Java app


Using Spark as a fail-over platform for Java app

Sergey Oboguev
I have an existing plain-Java (non-Spark) application that needs to run in a fault-tolerant way: if the node crashes, the application should be restarted on another node, and if the application crashes because of an internal fault, it should be restarted as well.

Normally I would run it in Kubernetes, but in this specific case Kubernetes is unavailable for organizational/bureaucratic reasons, and the only execution platform available in this domain is Spark.

Is it possible to wrap the application in a Spark-based launcher that will take care of executing it and restarting it when it fails?

Execution must be in a separate JVM, apart from other apps.

For optimum performance, the application also needs to be assigned guaranteed resources, i.e. the number of cores and the amount of RAM it requires, so it would be great if the launcher could take care of this too.
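
To make the resource part concrete, something along these lines is roughly what I have in mind, assuming Spark's SparkLauncher API can express it (a rough sketch only; the wrapper class name, jar path and master URL are placeholders):

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    // Rough sketch: the wrapper class name, jar path and master URL are placeholders.
    public class AppSubmitter {
        public static void main(String[] args) throws Exception {
            SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/path/to/my-app.jar")       // the existing plain-Java app, packaged as a jar
                .setMainClass("com.example.MyAppDriver")     // thin wrapper around the app's main()
                .setMaster("spark://master:7077")            // or yarn, depending on what the cluster runs
                .setDeployMode("cluster")                    // the driver (i.e. the app) gets its own JVM on the cluster
                .setConf(SparkLauncher.DRIVER_MEMORY, "8g")  // RAM reserved for that JVM
                .setConf("spark.driver.cores", "4")          // cores reserved for that JVM
                .startApplication();                         // asynchronous submission
            System.out.println("Submitted, state: " + handle.getState());
        }
    }

What I don't know is whether Spark can also take care of the restart-on-failure part, hence the question.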

Thanks for any advice.

Re: Using Spark as a fail-over platform for Java app

Lalwani, Jayesh

Can I cut a steak with a hammer? Sure you can, but the steak would taste awful.

Do you have organizational/bureaucratic issues with using a Load Balancer? Because that’s what you really need. Run your application on multiple nodes with a load balancer in front. When a node crashes, the load balancer will shift the traffic to the healthy node until the crashed node recovers.
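
The only thing the application itself typically needs for that setup is a health endpoint the load balancer can probe. A bare-bones sketch using the JDK's built-in HTTP server (the port and path here are arbitrary choices):

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // Minimal health endpoint for the load balancer to probe (port and path are arbitrary).
    public class HealthCheck {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/health", exchange -> {
                byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            server.start(); // the load balancer marks this node unhealthy once it stops answering
        }
    }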



Re: Using Spark as a fail-over platform for Java app

Jungtaek Lim-2
That's what resource managers provide. You could code against a resource manager and deal with it directly yourself, but I assume you're looking for a way to avoid that and let Spark handle the resource manager for you instead.

I admit I have no direct experience with this (I did something similar with Apache Storm on a standalone setup 5+ years ago), but the question can simply be reframed as "how do I make the driver fault-tolerant?", since your app logic can run in the driver even if you don't do any computation with Spark. There seem to be plenty of answers on Google for that reframed question, including this older one: https://stackoverflow.com/questions/26618464/what-happens-if-the-driver-program-crashes
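
In other words (just a sketch, not something I have run): put the existing app's entry point behind a driver main class and submit it in cluster mode, so the cluster manager restarts the driver when it dies. On a standalone cluster that would be spark-submit with --supervise; on YARN, driver restarts are governed by spark.yarn.maxAppAttempts. The class names below are placeholders:

    // Hypothetical wrapper: the existing plain-Java app simply becomes the Spark driver program.
    // Submitted in cluster mode, the driver runs in its own JVM with the requested driver
    // memory/cores, and the cluster manager can restart it when it dies, e.g. on standalone:
    //
    //   spark-submit --master spark://master:7077 --deploy-mode cluster --supervise \
    //     --driver-memory 8g --driver-cores 4 \
    //     --class com.example.MyAppDriver my-app.jar
    //
    public class MyAppDriver {
        public static void main(String[] args) throws Exception {
            // No SparkContext is required if the app does no Spark computation; the driver JVM
            // just runs the unchanged application logic and stays alive as long as it does.
            com.example.ExistingApp.main(args); // hypothetical entry point of the existing app
        }
    }

With --supervise the standalone master should also relaunch the driver on another worker if the original node goes down, which should cover the node-crash case as well.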

