[Spark API] - Dynamic precision for same BigDecimal value
I'm a software developer working with Apache Spark.
Last week I encountered a strange issue, which might be a bug.
I see different precision for the same BigDecimal value when calling map() on a dataFrame created as val df = sc.parallelize(seq).toDF(), versus calling map() on a dataFrame created as val df = sc.parallelize(seq).toDF().limit(2)
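In outline, the scenario looks like the sketch below (a rough sketch only; the Record case class, the values, and the local-mode setup are illustrative stand-ins for my actual code):

import org.apache.spark.sql.SparkSession

// Illustrative case class; the real schema simply has a BigDecimal column.
case class Record(id: Int, amount: BigDecimal)

object PrecisionRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("BigDecimalPrecisionRepro")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._
    val sc = spark.sparkContext

    val seq = Seq(Record(1, BigDecimal("1.23")), Record(2, BigDecimal("4.56")))

    // DataFrame built directly from the RDD
    val df1 = sc.parallelize(seq).toDF()
    // Same data, but with limit() applied before map()
    val df2 = sc.parallelize(seq).toDF().limit(2)

    // Inspect the BigDecimal values that map() sees in each case
    df1.map(row => row.getDecimal(1).toString).collect().foreach(println)
    df2.map(row => row.getDecimal(1).toString).collect().foreach(println)

    spark.stop()
  }
}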
For more details, I have created a small example, which can be found at the following link: