Weight column values not used in Binary Logistic Regression Summary


Stephen Boesch
In BinaryLogisticRegressionSummary, a number of methods tagged @Since("1.5.0") carry a comment identical to the following:

* @note This ignores instance weights (setting all to 1.0) from `LogisticRegression.weightCol`.
* This will change in later Spark versions.

Are there any plans to address this? Our team is using instance weights with sklearn's LogisticRegression, and this limitation will complicate a potential migration.
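
A possible interim workaround is sketched below: fit with weightCol set and compute a weighted metric by hand from the predictions, rather than reading it off the summary. This is only a sketch; the DataFrame names train_df/test_df and the column names "label", "features", "weight" are placeholders, and the manual weighted accuracy is just an example metric, not a Spark API.

from pyspark.ml.classification import LogisticRegression
import pyspark.sql.functions as F

# Fit with instance weights; training honours weightCol even though
# the summary's metrics (per the quoted note) ignore it.
lr = LogisticRegression(featuresCol="features", labelCol="label", weightCol="weight")
model = lr.fit(train_df)

pred = model.transform(test_df)

# Weighted accuracy computed by hand from the predictions DataFrame.
pred.select(
    F.sum(F.when(F.col("label") == F.col("prediction"), F.col("weight"))
           .otherwise(F.lit(0.0))).alias("correct_w"),
    F.sum("weight").alias("total_w"),
).select(
    (F.col("correct_w") / F.col("total_w")).alias("weighted_accuracy")
).show()

If memory serves, later Spark releases also added a weightCol parameter to the ml evaluators, which would make the manual step unnecessary; worth checking against the version you target.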




Re: Weight column values not used in Binary Logistic Regression Summary

Sea aj
Hello everyone, 

I have a data frame with two columns: ids and features.

Each cell in the features column is an array of Vectors.dense values, like:
[(DenseVector([0.5692]),), (DenseVector([0.5086]),)]

I need to train a new model for every single row of my data frame. How can I do it?
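
A rough sketch of one way this could be done, assuming the data frame is called df, the columns are named "id" and "features", and using a scikit-learn model purely as a stand-in for whatever per-row model is actually needed:

import numpy as np
from sklearn.cluster import KMeans   # placeholder per-row model

models = {}
for row in df.toLocalIterator():      # stream rows back to the driver
    # Each element of row["features"] is assumed to hold a DenseVector;
    # if the vectors are wrapped in one-field structs, use v[0].toArray().
    X = np.array([v.toArray() for v in row["features"]])
    models[row["id"]] = KMeans(n_clusters=1).fit(X)

For a large number of rows, a grouped pandas UDF would keep the per-row training on the executors instead of the driver, but the idea is the same: unpack each row's vectors into a local matrix and fit an independent model on it.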





