Spark: error reading DateType columns in partitioned parquet data


I just used StringType instead of DateType when writing the parquet files, and I don't have the issue anymore.
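A minimal sketch of what that looks like on the write side, assuming a hypothetical source path and the event_date column from the question below:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import date_format

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # hypothetical source path standing in for wherever the data comes from
    df = spark.read.parquet('s3a://mybucket/source/')

    # render the DateType column as a plain yyyy-MM-dd string before writing,
    # so readers never exercise the date decode path
    df = df.withColumn('event_date', date_format('event_date', 'yyyy-MM-dd'))

    df.write.parquet('s3a://mybucket/mykey/', compression='gzip')

A plain cast('string') would also work here, since casting a date to string yields the same yyyy-MM-dd form by default.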

Comments

  • Kamil Sindi almost 3 years

    I have parquet data in S3 partitioned by nyc_date in the format s3://mybucket/mykey/nyc_date=Y-m-d/*.gz.parquet.

    I have a DateType column event_date that for some reason throws the error below when I try to read it from S3 and write it to HDFS using EMR.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # read the partitioned parquet data from S3
    df = spark.read.parquet('s3a://mybucket/mykey/')

    # write a small sample to HDFS -- this is what throws the error
    df.limit(100).write.parquet('hdfs:///output/', compression='gzip')
    

    Error:

    java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainBinaryDictionary
        at org.apache.parquet.column.Dictionary.decodeToInt(Dictionary.java:48)
        at org.apache.spark.sql.execution.vectorized.OnHeapColumnVector.getInt(OnHeapColumnVector.java:233)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
        at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
        at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
        at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:389)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
        at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
        at org.apache.spark.scheduler.Task.run(Task.scala:86)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    

    Here's what I figured out:

    • Local works :-): I copied some of the data locally in the same format and can query it fine.
    • Avoiding event_date works :-): Selecting all 50+ columns except event_date doesn't cause any errors.
    • Explicit read path throws error :-(: Changing the read path to 's3a://mybucket/mykey/*/*.gz.parquet' still throws the error.
    • Specifying schema throws error :-(: Specifying the schema before loading still causes the same error (see the sketch after this list).
    • Data warehouse load works :-): I can load the data, including event_date, into a data warehouse.
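    For reference, here's roughly what those two probes look like; the schema is illustrative and only lists two of the 50+ columns (some_string_col is a hypothetical name):

    from pyspark.sql.types import StructType, StructField, DateType, StringType

    # probe 1: everything except event_date reads fine
    spark.read.parquet('s3a://mybucket/mykey/').drop('event_date').limit(100).collect()

    # probe 2: supplying an explicit schema up front still fails the same way
    schema = StructType([
        StructField('event_date', DateType(), True),
        StructField('some_string_col', StringType(), True),  # hypothetical; stands in for the other 50+ columns
    ])
    spark.read.schema(schema).parquet('s3a://mybucket/mykey/')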

    It's really weird that this causes an error only for a DateType column; I don't have any other DateType columns.

    Using Spark 2.0.2 and EMR 5.2.0.

  • Eric Bellet about 2 years
    I can't rewrite the files because I'm not the owner; I can only read the table with Spark SQL... Can I solve the problem with some config, or by creating a table on top of the parquet files?
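    If rewriting is off the table, one config-level experiment is to disable Spark's vectorized parquet reader, since the failing decode in the stack trace above runs inside OnHeapColumnVector; this is a sketch of the idea, not a confirmed fix:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    # the stack trace goes through the vectorized reader (OnHeapColumnVector),
    # so forcing the non-vectorized parquet path may sidestep the bad decode
    spark.conf.set('spark.sql.parquet.enableVectorizedReader', 'false')

    df = spark.read.parquet('s3a://mybucket/mykey/')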