Spark Structured Streaming checkpointing with S3 data source


sherif98
I have data that is continuously pushed to multiple S3 buckets. I want to set
up a structured streaming application that uses the S3 buckets as the data
source and do stream-stream joins.

My question is: if the application goes down for some reason, will restarting
it resume processing data from S3 where it left off?

For example, say I have 5 JSON files with 100 records each, and Spark failed
while processing the tenth record in the 3rd file. When the query runs again,
will it resume from the tenth record in the 3rd file?
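For context, the setup I have in mind looks roughly like the sketch below. The bucket paths, schemas, and watermark intervals are all made up for illustration; the `checkpointLocation` option is what I understand Spark uses to record progress for restart recovery:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

spark = SparkSession.builder.appName("s3-stream-stream-join").getOrCreate()

# Two S3 buckets as file-based streaming sources (paths/schemas hypothetical)
impressions = (spark.readStream
    .format("json")
    .schema("adId STRING, impressionTime TIMESTAMP")
    .load("s3a://bucket-a/impressions/")
    .alias("i"))

clicks = (spark.readStream
    .format("json")
    .schema("adId STRING, clickTime TIMESTAMP")
    .load("s3a://bucket-b/clicks/")
    .alias("c"))

# Stream-stream joins require watermarks so Spark can bound the join state
joined = (impressions.withWatermark("impressionTime", "1 hour")
    .join(clicks.withWatermark("clickTime", "1 hour"),
          expr("""i.adId = c.adId AND
                  c.clickTime BETWEEN i.impressionTime
                                  AND i.impressionTime + interval 1 hour""")))

# checkpointLocation records source offsets and join state, so a restarted
# query can continue from where the previous run stopped
query = (joined.writeStream
    .format("parquet")
    .option("path", "s3a://bucket-out/joined/")
    .option("checkpointLocation", "s3a://bucket-out/checkpoints/join1/")
    .start())
```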

