Spark 1.x - End of life

Spark 1.x - End of life

Ismaël Mejía
Hello,

I noticed that some of the big-data and cloud-managed Hadoop
distributions are starting to phase out or deprecate Spark 1.x, and I
was wondering whether the Spark community has already decided when it
will end support for Spark 1.x. I ask this also considering that the
latest release in the series is already almost a year old. Any idea
on this?

Thanks,
Ismaël


Re: Spark 1.x - End of life

Matei Zaharia
Hi Ismaël,

It depends on what you mean by “support”. In general, there won’t be new feature releases for 1.x (e.g. Spark 1.7) because all the new features are being added to the master branch. However, there is always room for bug fix releases if there is a catastrophic bug, and committers can make those at any time. In general though, I’d recommend moving workloads to Spark 2.x. We tried to make the migration as easy as possible (a few APIs changed, but not many), and 2.x has been out for a long time now and is widely used.
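
To give a concrete idea of the kind of change involved, here is a
minimal sketch (the JSON path, app name, and view name are just
placeholders): the separate SQLContext/HiveContext entry points from
1.x were unified into a single SparkSession in 2.x.

    import org.apache.spark.sql.SparkSession

    // 2.x: a single SparkSession replaces the SQLContext/HiveContext
    // entry points from 1.x
    val spark = SparkSession.builder()
      .appName("migration-example")   // placeholder app name
      .getOrCreate()

    // 1.x: sqlContext.read.json(...); in 2.x the same reader API
    // hangs off the session ("people.json" is a placeholder path)
    val people = spark.read.json("people.json")

    // registerTempTable (1.x) became createOrReplaceTempView (2.x)
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people").show()

For most DataFrame code the calls themselves carry over unchanged; it
is mainly the entry point and a few renamed methods (registerTempTable
is one) that need touching.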

We should perhaps write a more explicit maintenance policy, but all of this is run based on what committers want to work on; if someone thinks that there’s a serious enough issue in 1.6 to update it, they can put together a new release. It does help to hear from users about this though, e.g. if you think there’s a significant issue that people are missing.

Matei


Fwd: Spark 1.x - End of life

Ismaël Mejía
Thanks for your answer, Matei. I agree that a more explicit maintenance
policy is needed (even for the 2.x releases). I did not immediately
find anything about this on the website, so I ended up assuming the
information in the Wikipedia article that says the 1.6.x line is
still maintained.

I see that Spark, as an open source project, can get updates whenever
the community brings them in, but it is probably also a good idea to be
clear about the expectations for end users. I suppose some users
who could migrate to version 2 won’t do it if there is still support
(note that ‘support’ can be tricky considering how differently
companies re-package and maintain Spark, but that is a different
discussion). Anyway, it would be great to have this defined somewhere.
Maybe it is worth a discussion on dev@.

