Spark Structured Streaming using S3 as data source

Spark Structured Streaming using S3 as data source

sherif98
I have data that is continuously pushed to multiple S3 buckets. I want to set
up a Structured Streaming application that uses the S3 buckets as the data
source and performs stream-stream joins.

My question is: if the application goes down for some reason, will restarting
it resume processing data from S3 where it left off?
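
To make the setup concrete, here is a minimal sketch of the kind of job I have
in mind, assuming two hypothetical prefixes (s3a://bucket-a/events/ and
s3a://bucket-b/updates/) receive timestamped JSON records; the bucket names,
schemas, watermark durations, and join window are placeholders:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.expr
import org.apache.spark.sql.types._

val spark = SparkSession.builder()
  .appName("S3StreamStreamJoin")
  .getOrCreate()

// Placeholder schemas; the real ones depend on the JSON layout in the buckets.
val eventSchema = new StructType()
  .add("id", StringType)
  .add("payload", StringType)
  .add("event_time", TimestampType)

val updateSchema = new StructType()
  .add("id", StringType)
  .add("status", StringType)
  .add("update_time", TimestampType)

// Each S3 prefix acts as a streaming file source: new files that land under
// the prefix are picked up on each micro-batch. A schema must be given up front.
val events = spark.readStream
  .schema(eventSchema)
  .json("s3a://bucket-a/events/")
  .withWatermark("event_time", "1 hour")

val updates = spark.readStream
  .schema(updateSchema)
  .json("s3a://bucket-b/updates/")
  .withWatermark("update_time", "1 hour")

// Stream-stream inner join with a time-range condition so old state can be dropped.
val joined = events.alias("e").join(
  updates.alias("u"),
  expr("""
    e.id = u.id AND
    update_time BETWEEN event_time AND event_time + INTERVAL 2 HOURS
  """)
)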



Re: Spark Structured Streaming using S3 as data source

Burak Yavuz-2
Yes, the checkpoint makes sure that you start off from where you left off.
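
To be concrete, the progress is recorded under the checkpointLocation you set
on the sink. A minimal sketch, continuing from the hypothetical `joined` stream
in your message (the output and checkpoint paths are placeholders):

// Writing the joined stream out with an explicit checkpoint location. The
// checkpoint directory holds the offset log, commit log, and join state, so
// restarting the same query with the same checkpointLocation resumes from the
// last committed batch instead of re-reading everything from S3.
val query = joined.writeStream
  .format("parquet")
  .option("path", "s3a://bucket-out/joined/")
  .option("checkpointLocation", "s3a://bucket-out/checkpoints/joined/")
  .start()

query.awaitTermination()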


Re: Spark Structured Streaming using S3 as data source

sherif98
Thanks for the reply; to make sure I got this right:

Say I have 5 JSON files with 100 records in each file, and Spark fails while
processing the tenth record in the 3rd file. When the query runs again, it will
begin processing from the tenth record in the 3rd file; did I get that right?

This assumes checkpointing is used.
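
For what it's worth, I was planning to verify this by listing the checkpoint
directory (reusing the placeholder path from the earlier sketch). As I
understand it, Structured Streaming keeps an offsets/ log of planned batches,
a commits/ log of completed batches, and a sources/0/ log of which files the
file source assigned to each batch:

import org.apache.hadoop.fs.{FileSystem, Path}

// Placeholder checkpoint path from the earlier sketch; `spark` is the same
// SparkSession. Listing the directory shows the logs the query resumes from.
val checkpoint = new Path("s3a://bucket-out/checkpoints/joined/")
val fs = FileSystem.get(checkpoint.toUri, spark.sparkContext.hadoopConfiguration)
fs.listStatus(checkpoint).foreach(status => println(status.getPath))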
