[Structured Streaming] Reuse computation result


[Structured Streaming] Reuse computation result

nezhazheng
Hi all,

I have a scenario like this:

val df = dataframe.map(...).filter(...)
// agg 1
val query1 = df.groupBy().sum().writeStream.start()
// agg 2
val query2 = df.groupBy().count().writeStream.start()

With Spark Streaming, we can call persist() on an RDD to reuse the computation result: when we call persist() after filter(), the map().filter() operators only run once.
With Structured Streaming, we can't apply persist() directly on the DataFrame, so query1 and query2 will not reuse the result after filter() and map()/filter() run twice. Is there a way to solve this?
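For reference, here is roughly what that DStream-style reuse looks like; the socket source and the trim/non-empty transformations below are only placeholders, not my real job:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("reuse-example")
val ssc = new StreamingContext(conf, Seconds(10))

// placeholder source and transformations
val lines = ssc.socketTextStream("localhost", 9999)
val filtered = lines.map(_.trim).filter(_.nonEmpty)

// persist() marks the transformed stream, so both outputs below reuse the
// same computed RDDs instead of re-running map()/filter()
filtered.persist()

filtered.count().print()                            // agg 1: count per batch
filtered.map(_.length.toLong).reduce(_ + _).print() // agg 2: sum per batch

ssc.start()
ssc.awaitTermination()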

Regards,

Shu li Zheng


Re: [Structured Streaming] Reuse computation result

JayeshLalwani

There is no way to solve this within Spark.

 

One option is to break up your application into multiple applications. The first application can filter and write the filtered results to a Kafka topic. A second application can read from the topic and compute the sum. A third application can read from the topic and compute the count.
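Roughly, that split could look like the sketch below; the broker address, topic name, filter predicate, and checkpoint paths are placeholders, and spark/dataframe stand for the session and the streaming DataFrame from the original message:

import org.apache.spark.sql.functions.col

// Application 1: filter and publish the filtered rows to Kafka
val filtered = dataframe
  .filter(col("value").isNotNull)                       // placeholder predicate
  .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

val filterQuery = filtered.writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("topic", "filtered-events")
  .option("checkpointLocation", "/tmp/chk-filter")
  .start()

// Application 2 (a separate job): read the topic back and count
val fromKafka = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")
  .option("subscribe", "filtered-events")
  .load()

val countQuery = fromKafka.groupBy().count()
  .writeStream
  .outputMode("complete")
  .format("console")
  .option("checkpointLocation", "/tmp/chk-count")
  .start()

// A third application would read the same topic and compute the sum the same way.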

 



Re: [Structured Streaming] Reuse computation result

Sandip Mehta
In reply to this post by nezhazheng
You can use the persist() or cache() operation on the DataFrame.
