Spark SQL saveAsTable is not compatible with Hive when a partition is specified

Solution 1

That's because DataFrame.saveAsTable creates RDD partitions but not Hive partitions. The workaround is to create the table via HQL before calling DataFrame.saveAsTable. An example from SPARK-14927 looks like this:

hc.sql("create external table tmp.partitiontest1(val string) partitioned by (year int)")

Seq(2012 -> "a", 2013 -> "b", 2014 -> "c").toDF("year", "val")
  .write
  .partitionBy("year")
  .mode(SaveMode.Append)
  .saveAsTable("tmp.partitiontest1")
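
After the append, the Hive partitions should be registered in the metastore; a quick check from the same HiveContext (assuming hc as above):

hc.sql("show partitions tmp.partitiontest1").show()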

Solution 2

A solution is to create the table with Hive and then save the data with ...partitionBy("year").insertInto("default.mytable").

In my experience, creating the table in Hive and then using ...partitionBy("year").saveAsTable("default.mytable") did not work. This is with Spark 1.6.2.
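
For reference, a minimal Scala sketch of this approach, assuming a HiveContext named hc on Spark 1.6; the table default.mytable and its single val column are illustrative:

import org.apache.spark.sql.SaveMode
import hc.implicits._

// pre-create the partitioned table in Hive
hc.sql("create table default.mytable (val string) partitioned by (year int) stored as parquet")

// dynamic partition inserts may require these Hive settings
hc.sql("set hive.exec.dynamic.partition=true")
hc.sql("set hive.exec.dynamic.partition.mode=nonstrict")

// insertInto matches columns by position: data columns first, partition column last
Seq(("a", 2012), ("b", 2013)).toDF("val", "year")
  .write
  .mode(SaveMode.Append)
  .partitionBy("year")
  .insertInto("default.mytable")

Note that, per a comment further down, Spark 2.x rejects combining insertInto() with partitionBy(); see the adjustment after that comment.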

Comments

  • dunlu_98k almost 2 years

    Kind of an edge case: when saving a parquet table with partitioning in Spark SQL,

    // schema definition (truncated; further fields follow)
    final StructType schema = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("time", DataTypes.StringType, true),
        DataTypes.createStructField("accountId", DataTypes.StringType, true),
        ...
    ));
    
    DataFrame df = hiveContext.read().schema(schema).json(stringJavaRDD);
    
    df.coalesce(1)
        .write()
        .mode(SaveMode.Append)
        .format("parquet")
        .partitionBy("year")
        .saveAsTable("tblclick8partitioned");
    

    Spark warns:

    Persisting partitioned data source relation into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive

    In Hive:

    hive> describe tblclick8partitioned;
    OK
    col                     array<string>           from deserializer
    Time taken: 0.04 seconds, Fetched: 1 row(s)
    

    Obviously the schema is not correct; however, if I use saveAsTable in Spark SQL without partitioning, the table can be queried without problems.

    The question is: how can I make a partitioned parquet table in Spark SQL compatible with Hive, with the partition info intact?

  • Tim Malone over 7 years
    Hi rys, welcome to StackOverflow and thank you for your answer. As links can change over time, would you be able to edit your answer to quote the relevant portions of the solution here? Thanks!
  • WBC about 6 years
    I think this won't work with Spark 2.0.0; I get User class threw exception: org.apache.spark.sql.AnalysisException: insertInto() can't be used together with partitionBy(). Partition columns have already be defined for the table. It is not necessary to use partitionBy().;
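
    Per that error message, on Spark 2.x the adjustment appears to be dropping partitionBy() and letting the table's existing partition definition apply; a minimal, hedged sketch (df stands for the DataFrame being written, table name as in Solution 2):

    df.write
      .mode(SaveMode.Append)
      .insertInto("default.mytable") // partition columns come from the table definition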