Updating dynamic partitioned hive table throws error - Partition spec contains non-partition columns

nir

I am using Spark 2.2.1 and Hive 2.1. I am trying to insert overwrite multiple partitions of an existing partitioned Hive/Parquet table.

The table was created through the SparkSession.

I have a table 'mytable' partitioned by columns P1 and P2.
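For reference, the table was created along these lines (a sketch only; the data column names and types are placeholders, just the partitioning matches my setup):

    spark.sql("""
      CREATE TABLE mytable (c1 STRING, c2 STRING)  -- hypothetical data columns
      PARTITIONED BY (P1 INT, P2 INT)
      STORED AS PARQUET
    """)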

I have the following set on the SparkSession object:

"hive.exec.dynamic.partition"=true
"hive.exec.dynamic.partition.mode"="nonstrict"

Code:

val df = spark.read.csv(pathToNewData)
df.createOrReplaceTempView("updateTable") // 'df' may contain data for multiple partitions, i.e. multiple values of P1 and P2

spark.sql("insert overwrite table mytable PARTITION(P1, P2) select c1, c2,..cn, P1, P2 from updateTable") // I made sure that partition columns P1 and P2 are at the end of projection list.

I am getting the following error:

org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.Table.ValidationFailureSemanticException: Partition spec {p1=, p2=, P1=1085, P2=164590861} contains non-partition columns;

The dataframe 'df' does have records with P1=1085, P2=164590861, so the values themselves are fine. It looks like a casing issue (lower vs. upper case): the partition spec in the error contains both the empty lower-case keys p1, p2 and the populated upper-case keys P1, P2. I tried both casings in my query, but it still does not work.
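To make the casing variants concrete, the lower-case attempt looks roughly like this (a sketch; the column list is elided as above, and renaming on the dataframe side is just one way to force lower-case names):

    val lowered = df.toDF(df.columns.map(_.toLowerCase): _*) // force every column name to lower case
    lowered.createOrReplaceTempView("updateTable")
    spark.sql("insert overwrite table mytable PARTITION(p1, p2) select c1, c2, ..., cn, p1, p2 from updateTable")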

It works if I use static partitioning:

spark.sql("insert overwrite table mytable PARTITION(P1=1085, P2=164590861) select c1, c2, ..., cn, P1, P2 from updateTable where P1=1085 and P2=164590861")

But this is not what I am looking for; I need to get dynamic partitioning updates working so I can overwrite multiple partitions in one insert.

Thanks