Convert file of JSON objects to Parquet file


Solution 1

Kite has support for importing JSON to both Avro and Parquet formats via its command-line utility, kite-dataset.

First, you would infer the schema of your JSON:

kite-dataset json-schema sample-file.json -o schema.avsc

Then you can use that file to create a Parquet Hive table:

kite-dataset create mytable --schema schema.avsc --format parquet

And finally, you can load your JSON into the dataset.

kite-dataset json-import sample-file.json mytable

You can also import an entire directory stored in HDFS. In that case, Kite will use a MapReduce job to do the import.

Solution 2

You can actually use Drill itself to create a Parquet file from the output of any query.

create table student_parquet as select * from `student.json`;

The above line should be good enough. Drill infers the column types from the data in the fields. You can substitute your own query to create a Parquet file from any result set.

Solution 3

To complete @rahul's answer: you can use Drill to do this, but I needed to add more to the query to get it working out of the box with Drill.

create table dfs.tmp.`filename.parquet` as select * from dfs.`/tmp/filename.json` t

I needed to specify the storage plugin (dfs). The "root" workspace can read from the whole disk but is not writable; the tmp workspace (dfs.tmp) is writable and writes to /tmp, so I wrote there.

But the problem is that if the JSON is nested or perhaps contains unusual characters, I would get a cryptic error:

org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: java.lang.IndexOutOfBoundsException:

If I have a structure that looks like members: {id: 123, name: "joe"}, I would have to change the select to

select members.id as members_id, members.name as members_name

or

select members.id as `members.id`, members.name as `members.name`

to get it to work.

I assume the reason is that Parquet is a columnar format, so it needs flat columns; JSON isn't flat by default, so nested structures need to be converted.

The problem is that I have to know my JSON schema and build the select to include all the possibilities. I'd be happy if someone knows a better way to do this.
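That select-building step can be automated from a sample record. Here is a minimal Python sketch (stdlib only; the helper names `flatten_columns` and `build_select` are my own, not part of Drill) that walks one sample JSON object and emits a Drill-style SELECT with the nested fields aliased to flat column names:

```python
import json

def flatten_columns(obj, prefix=""):
    """Walk one sample JSON object and collect the dotted path of
    every leaf field (e.g. members.id), recursing into nested dicts."""
    cols = []
    for key, value in obj.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            cols.extend(flatten_columns(value, path))
        else:
            cols.append(path)
    return cols

def build_select(obj, table="dfs.`/tmp/filename.json`"):
    """Build a SELECT that aliases each nested field to a flat
    column name (members.id -> members_id)."""
    parts = [f"t.{c} AS `{c.replace('.', '_')}`" for c in flatten_columns(obj)]
    return "SELECT " + ", ".join(parts) + f" FROM {table} t"

sample = json.loads('{"members": {"id": 123, "name": "joe"}}')
print(build_select(sample))
# SELECT t.members.id AS `members_id`, t.members.name AS `members_name` FROM dfs.`/tmp/filename.json` t
```

This only covers fields present in the one sample object, so records with extra or missing fields would still need manual handling, and it does not descend into arrays.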



Author: danieltahara

Updated on June 04, 2022

Comments

  • danieltahara
    danieltahara almost 2 years

    Motivation: I want to load the data into Apache Drill. I understand that Drill can handle JSON input, but I want to see how it performs on Parquet data.

    Is there any way to do this without first loading the data into Hive, etc and then using one of the Parquet connectors to generate an output file?

  • Yehosef
    Yehosef almost 9 years
FYI - I tried this and it needs a Hadoop installation. I'm also trying to convert the files for use with Drill, and I don't have Hadoop.
  • blue
    blue over 8 years
    Correct. The link above is for installing on a Hadoop cluster. If you want, there is also a tarball distribution in maven central that includes the dependencies. The trouble with that one is that you have to decide what those dependencies should be, which varies by Hadoop distribution. That's why I recommend using the instructions that get the Hadoop dependencies from the cluster you're running on.
  • rahul
    rahul over 8 years
No need to select the individual columns; a simple select * should work. Since you are seeing an error, I guess your 'members' structure has a different number of fields in different records. Also be aware that Drill, as of today, cannot handle schema changes within the same column. So if you have a column 'val1' that contains a mix of strings and integers across records, Drill will not be able to handle it.