[ML] LogisticRegression and dataset standardization before training
LogisticAggregator scales every sample on every iteration. Without that
on-the-fly scaling, binaryUpdateInPlace could be rewritten in terms of
BLAS.dot, which should significantly improve performance.
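To make the question concrete, here is a rough sketch of the two variants
(illustrative only, not the actual LogisticAggregator source; the method
names are mine, and the package declaration is only there because BLAS is
package-private inside Spark):

package org.apache.spark.ml.optim

import org.apache.spark.ml.linalg.{BLAS, Vector}

object MarginSketch {

  // Roughly what happens today: every active feature value is divided by
  // its standard deviation inside the loop, for every sample, on every
  // optimizer iteration.
  def marginScaledOnTheFly(
      features: Vector,
      coefficients: Array[Double],
      featuresStd: Array[Double]): Double = {
    var sum = 0.0
    features.foreachActive { (index, value) =>
      if (featuresStd(index) != 0.0 && value != 0.0) {
        sum += coefficients(index) * value / featuresStd(index)
      }
    }
    sum
  }

  // If the dataset were standardized once before training, the same margin
  // would collapse into a single dot product that can be dispatched to
  // native BLAS for dense vectors.
  def marginPreScaled(scaledFeatures: Vector, coefficients: Vector): Double =
    BLAS.dot(scaledFeatures, coefficients)
}

The second variant also removes the branch and the division from the inner
loop, which is where I would expect most of the speedup to come from.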
However, there is a comment saying that standardizing and caching the
dataset before training will "create a lot of overhead".
What kind of overhead does this refer to, and what is the rationale for
not scaling the dataset prior to training?
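For reference, this is roughly what I mean by standardizing before
training (a hedged sketch; the StandardScaler usage and column names are
illustrative, not a proposal for the exact implementation):

import org.apache.spark.ml.feature.StandardScaler
import org.apache.spark.sql.DataFrame

def standardizeAndCache(dataset: DataFrame): DataFrame = {
  val scaler = new StandardScaler()
    .setInputCol("features")
    .setOutputCol("scaledFeatures")
    .setWithStd(true)
    .setWithMean(false) // centering would densify sparse rows
  val scaled = scaler.fit(dataset).transform(dataset)
  // The overhead I can see: one extra pass over the data to fit the scaler,
  // plus a second (cached) copy of every feature vector kept in memory for
  // the whole optimization. Is that the overhead the comment refers to?
  scaled.cache()
}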