[Spark API] - Dynamic precision for same BigDecimal value

IrinaStan
Hi team,

I'm a software developer working with Apache Spark.

Last week I encountered a strange issue, which might be a bug.

I see different precision for the same BigDecimal value when calling map(): once against a DataFrame created as val df = sc.parallelize(seq).toDF(), and once against a DataFrame created as val df = sc.parallelize(seq).toDF().limit(2).
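For illustration, here is a minimal sketch of the kind of reproduction I mean (the contents of seq and the single-column access are assumptions made for the sketch; the actual data and code are in the notebook linked below):

import org.apache.spark.sql.SparkSession

// Assumed setup: a local SparkSession. In a Databricks notebook, spark and sc
// already exist and these three lines can be skipped.
val spark = SparkSession.builder().appName("DecimalPrecisionRepro").master("local[*]").getOrCreate()
import spark.implicits._
val sc = spark.sparkContext

// Illustrative values only; the real data comes from my actual use case.
val seq = Seq(BigDecimal("123.4567"), BigDecimal("0.0001"))

// Case 1: map() over the DataFrame built directly from the RDD.
val df1 = sc.parallelize(seq).toDF()
df1.map(_.getDecimal(0).toString).collect().foreach(println)

// Case 2: the same data, but with limit(2) applied before map().
val df2 = sc.parallelize(seq).toDF().limit(2)
df2.map(_.getDecimal(0).toString).collect().foreach(println)

// The issue: the printed precision of the same BigDecimal value
// differs between the two cases.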

For more details, I have created a small example, which can be found at the following link:

https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/7958346027016861/2296698945593142/5693253843748751/latest.html

I hope the example is clear enough, and I look forward to your response.

Thank you for your time,
Irina Stan