_corrupt_record error when reading a JSON file into Spark


Solution 1

You need to have one JSON object per line in your input file; see http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameReader.json

If your JSON file looks like this, it will give you the expected DataFrame:

{ "a": 1, "b": 2 }
{ "a": 3, "b": 4 }

....
df.show()
+---+---+
|  a|  b|
+---+---+
|  1|  2|
|  3|  4|
+---+---+
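
For completeness, the elided read step might look like this (a sketch, assuming the two-line file above is saved as my_file.json):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)

# the default reader expects exactly one JSON object per line
df = sqlc.read.json('my_file.json')
df.show()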

Solution 2

If you want to leave your JSON file as it is (without stripping the newline characters \n), include the multiLine=True keyword argument (available in Spark 2.2+):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)

# multiLine lets a single JSON record span several physical lines
df = sqlc.read.json('my_file.json', multiLine=True)
df.show()
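
On Spark 2.0+ the same read can also go through SparkSession, which supersedes SQLContext (a sketch, assuming the pretty-printed my_file.json from the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# same multiLine option, exposed on the session's DataFrameReader
df = spark.read.json('my_file.json', multiLine=True)
df.show()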

Solution 3

In Spark 2.2+ you can read a multiline JSON file using the following command:

val dataframe = spark.read.option("multiline", true).json("filePath")

If there is one JSON object per line, then:

val dataframe = spark.read.json(filepath)

Solution 4

Adding to @Bernhard's great answer

import json

# original file was written with pretty-print inside a list
with open("pretty-printed.json") as jsonfile:
    js = json.load(jsonfile)

# write a new file with one object per line (append mode keeps any existing lines)
with open("flattened.json", 'a') as outfile:
    for d in js:
        json.dump(d, outfile)
        outfile.write('\n')
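
With the records flattened to one object per line, the default reader handles the file directly; a quick check, reusing the spark session from the sketch in Solution 2:

df = spark.read.json('flattened.json')
df.show()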

Comments

  • mar tin, almost 2 years ago

    I've got this JSON file

    {
        "a": 1, 
        "b": 2
    }
    

    which was produced with Python's json.dump method. Now, I want to read this file into a DataFrame in Spark, using pyspark. Following the documentation, I'm doing this:

    sc = SparkContext()

    sqlc = SQLContext(sc)

    df = sqlc.read.json('my_file.json')

    print df.show()

    The print statement spits out this though:

    +---------------+
    |_corrupt_record|
    +---------------+
    |              {|
    |       "a": 1, |
    |         "b": 2|
    |              }|
    +---------------+
    

    Does anyone know what's going on and why it is not interpreting the file correctly?

  • M.Rez, about 7 years ago
    How can I fix this if my JSON file is huge (a few hundred thousand rows) and has a lot of newlines in between the records (columns or features)? Thanks.
  • Ankita Mehta, about 5 years ago
    This is Scala, not Python.
  • ttimasdf, over 4 years ago
    Maybe use jq to reformat (compact) the file? @M.Rez
  • Skippy le Grand Gourou, almost 3 years ago
    @ttimasdf Namely, using the jq -c option.
  • Christian Singer, about 2 years ago
    This still works in Python, however: spark.read.option("multiline", True).json('filePath')