SPARK-25959 - Difference in featureImportances results on computed vs saved models
Hi Spark Users,
I trained a GBT model and found that the feature importances computed on the freshly fitted model differ from those of the same model after it is saved to storage and loaded back.
I also found that once the persisted model is loaded, saved again, and loaded back, the feature importances remain the same.
I am not sure whether this is a bug in writing and reading the model the first time, or whether I am missing some parameter that needs to be set before saving the model (so the model picks up some defaults, causing the feature importances to change).
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.classification.GBTClassifier

// Use every column except the label column "e" as a feature
val featureColumns = testDF.columns.filter(_ != "e")

// Assemble the features into a vector
val assembler = new VectorAssembler()
  .setInputCols(featureColumns)
  .setOutputCol("features")

// Transform the data to get the feature data set
val featureDF = assembler.transform(testDF)

// Train a GBT model.
val gbt = new GBTClassifier()
  .setLabelCol("e")
  .setFeaturesCol("features")
  .setMaxDepth(2)
  .setMaxBins(5)
  .setMaxIter(10)
  .setSeed(10)
  .fit(featureDF)
// Print the top 20 feature importances of the freshly fitted model
featureColumns.zip(gbt.featureImportances.toArray).sortBy(-_._2).take(20).foreach(println)
/* Prints

// Write out the model
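The printed output and the save/load code after "// Write out the model" are truncated above. A minimal sketch of the round trip I am describing, assuming a hypothetical local path /tmp/gbt-model that was not part of the original snippet:

import org.apache.spark.ml.classification.GBTClassificationModel

// Persist the fitted model, then load it back from the same path
// (path is a placeholder, not from the original code)
gbt.write.overwrite().save("/tmp/gbt-model")
val loaded = GBTClassificationModel.load("/tmp/gbt-model")

// Print the top 20 feature importances of the loaded model; in my runs
// these differ from the importances printed for the in-memory model above
featureColumns.zip(loaded.featureImportances.toArray).sortBy(-_._2).take(20).foreach(println)

// Saving the loaded model and loading it again yields the same
// importances as `loaded`, which is the second observation above
loaded.write.overwrite().save("/tmp/gbt-model-2")
val reloaded = GBTClassificationModel.load("/tmp/gbt-model-2")
featureColumns.zip(reloaded.featureImportances.toArray).sortBy(-_._2).take(20).foreach(println)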