pandas.io.json.json_normalize with very nested json


Solution 1

In the pandas example (below) what do the brackets mean? Is there a logic to be followed to go deeper with the []? [...]

result = json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])

Each string or list of strings in the ['state', 'shortname', ['info', 'governor']] value is a path to an element to include, in addition to the selected rows. The second argument to json_normalize() (record_path, set to 'counties' in the documentation example) tells the function how to select the elements from the input data structure that make up the rows in the output, and the meta paths add further metadata that is included with each of those rows. Think of these as table joins in a database, if you will.
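
For reference, the same call can be spelled out with keyword arguments, which makes the role of each piece more explicit. This is only a minimal sketch; it assumes the data list from the documentation example shown below:

from pandas.io.json import json_normalize  # newer pandas: from pandas import json_normalize

result = json_normalize(
    data,                            # the input structure
    record_path='counties',          # which nested lists become the output rows
    meta=['state', 'shortname', ['info', 'governor']],  # parent values to repeat on each row
)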

The input for the US States documentation example has two dictionaries in a list, and both of these dictionaries have a counties key that references another list of dicts:

>>> data = [{'state': 'Florida',
...          'shortname': 'FL',
...         'info': {'governor': 'Rick Scott'},
...         'counties': [{'name': 'Dade', 'population': 12345},
...                      {'name': 'Broward', 'population': 40000},
...                      {'name': 'Palm Beach', 'population': 60000}]},
...         {'state': 'Ohio',
...          'shortname': 'OH',
...          'info': {'governor': 'John Kasich'},
...          'counties': [{'name': 'Summit', 'population': 1234},
...                       {'name': 'Cuyahoga', 'population': 1337}]}]
>>> pprint(data[0]['counties'])
[{'name': 'Dade', 'population': 12345},
 {'name': 'Broward', 'population': 40000},
 {'name': 'Palm Beach', 'population': 60000}]
>>> pprint(data[1]['counties'])
[{'name': 'Summit', 'population': 1234},
 {'name': 'Cuyahoga', 'population': 1337}]

Between them there are 5 rows of data to use in the output:

>>> json_normalize(data, 'counties')
         name  population
0        Dade       12345
1     Broward       40000
2  Palm Beach       60000
3      Summit        1234
4    Cuyahoga        1337

The meta argument then names elements that live next to those counties lists, and those are merged in separately. The values from the first dictionary, data[0], for those meta paths are ('Florida', 'FL', 'Rick Scott'), and for data[1] they are ('Ohio', 'OH', 'John Kasich'), so each set of values is attached to the counties rows that came from the same top-level dictionary, repeated 3 and 2 times respectively:

>>> data[0]['state'], data[0]['shortname'], data[0]['info']['governor']
('Florida', 'FL', 'Rick Scott')
>>> data[1]['state'], data[1]['shortname'], data[1]['info']['governor']
('Ohio', 'OH', 'John Kasich')
>>> json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
         name  population    state shortname info.governor
0        Dade       12345  Florida        FL    Rick Scott
1     Broward       40000  Florida        FL    Rick Scott
2  Palm Beach       60000  Florida        FL    Rick Scott
3      Summit        1234     Ohio        OH   John Kasich
4    Cuyahoga        1337     Ohio        OH   John Kasich

So, if you pass in a list for the meta argument, then each element in the list is a separate path, and each of those separate paths identifies data to add to the rows in the output.
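
Nested meta paths such as ['info', 'governor'] end up as dotted column names (info.governor). If you prefer a different separator, json_normalize also accepts a sep argument; a small sketch using the same example data:

from pandas.io.json import json_normalize  # newer pandas: from pandas import json_normalize

# Same call as above, but flattened column names are joined with '_',
# so the nested meta path comes out as 'info_governor' instead of 'info.governor'.
result = json_normalize(
    data, 'counties',
    ['state', 'shortname', ['info', 'governor']],
    sep='_',
)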

In your example JSON, there aren't many nested lists to elevate with the first argument the way 'counties' was used above. The only candidate in that data structure is the nested 'authors' key; you'd have to extract each ['_source', 'authors'] path, after which you can add other keys from the parent objects to augment those rows.

The meta argument then pulls in the _id key from the outermost objects, followed by the nested ['_source', 'journal'] and ['_source', 'title'] paths.

The record_path argument takes the authors lists as the starting point; these look like:

>>> d['hits']['hits'][0]['_source']['authors']   # this value is None, and is skipped
>>> d['hits']['hits'][1]['_source']['authors']
[{'affiliations': ['Punjabi University'],
  'author_id': '780E3459',
  'author_name': 'munish puri'},
 {'affiliations': ['Punjabi University'],
  'author_id': '48D92C79',
  'author_name': 'rajesh dhaliwal'},
 {'affiliations': ['Punjabi University'],
  'author_id': '7D9BD37C',
  'author_name': 'r s singh'}]
>>> d['hits']['hits'][2]['_source']['authors']
[{'author_id': '7FF872BC',
  'author_name': 'barbara eileen ryan'}]
>>> # etc.

and so gives you the following rows:

>>> json_normalize(d['hits']['hits'], ['_source', 'authors'])
           affiliations author_id          author_name
0  [Punjabi University]  780E3459          munish puri
1  [Punjabi University]  48D92C79      rajesh dhaliwal
2  [Punjabi University]  7D9BD37C            r s singh
3                   NaN  7FF872BC  barbara eileen ryan
4                   NaN  0299B8E9     fraser j harbutt
5                   NaN  7DAB7B72   richard m freeland

and then we can use the third argument, meta, to add more columns such as _id, _source.journal and _source.title, using ['_id', ['_source', 'journal'], ['_source', 'title']]:

>>> json_normalize(
...     d['hits']['hits'],
...     ['_source', 'authors'],
...     ['_id', ['_source', 'journal'], ['_source', 'title']]
... )
           affiliations author_id          author_name       _id   \
0  [Punjabi University]  780E3459          munish puri  7AF8EBC3  
1  [Punjabi University]  48D92C79      rajesh dhaliwal  7AF8EBC3
2  [Punjabi University]  7D9BD37C            r s singh  7AF8EBC3
3                   NaN  7FF872BC  barbara eileen ryan  7521A721
4                   NaN  0299B8E9     fraser j harbutt  7DAEB9A4
5                   NaN  7DAB7B72   richard m freeland  7B3236C5

                                     _source.journal  \
0  Journal of Industrial Microbiology & Biotechno...
1  Journal of Industrial Microbiology & Biotechno...
2  Journal of Industrial Microbiology & Biotechno...
3                     The American Historical Review
4                     The American Historical Review
5                     The American Historical Review

                                       _source.title
0  Development of a stable continuous flow immobi...
1  Development of a stable continuous flow immobi...
2  Development of a stable continuous flow immobi...
3  Feminism and the women's movement : dynamics o...
4  The iron curtain : Churchill, America, and the...
5  The Truman Doctrine and the origins of McCarth...
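
Note that the first hit has authors set to None rather than a list; here it simply contributes no rows, but depending on your pandas version such records can also trip up json_normalize. A defensive sketch (same paths as above, only the pre-filtering is new) skips them explicitly:

from pandas.io.json import json_normalize  # newer pandas: from pandas import json_normalize

# Keep only the hits whose '_source.authors' value is actually a list.
hits_with_authors = [
    hit for hit in d['hits']['hits']
    if isinstance(hit['_source'].get('authors'), list)
]
authors = json_normalize(
    hits_with_authors,
    ['_source', 'authors'],
    ['_id', ['_source', 'journal'], ['_source', 'title']],
)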

Solution 2

You can also have a look at the flatten_json library, which does not require you to spell out the column hierarchies the way json_normalize does:

import pandas as pd
from flatten_json import flatten

# d is the dictionary loaded from the JSON file, as in the question
data = d['hits']['hits']
dict_flattened = (flatten(record, '.') for record in data)
df = pd.DataFrame(dict_flattened)
print(df)

See https://github.com/amirziai/flatten.
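
Because flatten_json flattens every level, each hit becomes one very wide row, with list entries turned into numbered columns (with the '.' separator used above, something like _source.authors.0.author_name; the exact names depend on flatten_json's conventions). A small sketch for picking out just the author-related columns afterwards:

# The column names below are assumptions based on how flatten_json numbers
# list entries; adjust to whatever df.columns actually shows.
author_cols = df.filter(like='authors')     # every flattened author column
print(df[['_id']].join(author_cols).head())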

Comments

  • Daniel Vargas
    Daniel Vargas almost 2 years

    I have been trying to normalize a very nested JSON file that I will later analyze. What I am struggling with is how to go more than one level deep to normalize.

    I went through the pandas.io.json.json_normalize documentation, since it does exactly what I want it to do.

    I have been able to normalize part of it and now understand how dictionaries work, but I am still not there.

    With the code below I am able to get only the first level.

    import json
    import pandas as pd
    from pandas.io.json import json_normalize
    
    with open('authors_sample.json') as f:
        d = json.load(f)
    
    raw = json_normalize(d['hits']['hits'])
    
    authors = json_normalize(data = d['hits']['hits'], 
                             record_path = '_source', 
                             meta = ['_id', ['_source', 'journal'], ['_source', 'title'], 
                                     ['_source', 'normalized_venue_name']
                                     ])
    

    I am trying to 'dig' into the 'authors' dictionary with the code below, but record_path = ['_source', 'authors'] throws a TypeError: string indices must be integers. As far as I understand json_normalize the logic should be sound, but I still don't quite understand how to dive into a JSON with dicts vs. lists.

    I even went through this simple example.

    authors = json_normalize(data = d['hits']['hits'], 
                             record_path = ['_source', 'authors'], 
                             meta = ['_id', ['_source', 'journal'], ['_source', 'title'], 
                                     ['_source', 'normalized_venue_name']
                                     ])
    

    Below is a chunk of the JSON file (5 records).

    {u'_shards': {u'failed': 0, u'successful': 5, u'total': 5},
     u'hits': {u'hits': [{u'_id': u'7CB3F2AD',
        u'_index': u'scibase_listings',
        u'_score': 1.0,
        u'_source': {u'authors': None,
         u'deleted': 0,
         u'description': None,
         u'doi': u'',
         u'is_valid': 1,
         u'issue': None,
         u'journal': u'Physical Review Letters',
         u'link': None,
         u'meta_description': None,
         u'meta_keywords': None,
         u'normalized_venue_name': u'phys rev lett',
         u'pages': None,
         u'parent_keywords': [u'Chromatography',
          u'Quantum mechanics',
          u'Particle physics',
          u'Quantum field theory',
          u'Analytical chemistry',
          u'Quantum chromodynamics',
          u'Physics',
          u'Mass spectrometry',
          u'Chemistry'],
         u'pub_date': u'1987-03-02 00:00:00',
         u'pubtype': None,
         u'rating_avg_weighted': 0,
         u'rating_clarity': 0.0,
         u'rating_clarity_weighted': 0.0,
         u'rating_innovation': 0.0,
         u'rating_innovation_weighted': 0.0,
         u'rating_num_weighted': 0,
         u'rating_reproducability': 0,
         u'rating_reproducibility_weighted': 0.0,
         u'rating_versatility': 0.0,
         u'rating_versatility_weighted': 0.0,
         u'review_count': 0,
         u'tag': [u'mass spectra', u'elementary particles', u'bound states'],
         u'title': u'Evidence for a new meson: A quasinuclear NN-bar bound state',
         u'userAvg': 0.0,
         u'user_id': None,
         u'venue_name': u'Physical Review Letters',
         u'views_count': 0,
         u'volume': None},
        u'_type': u'listing'},
       {u'_id': u'7AF8EBC3',
        u'_index': u'scibase_listings',
        u'_score': 1.0,
        u'_source': {u'authors': [{u'affiliations': [u'Punjabi University'],
           u'author_id': u'780E3459',
           u'author_name': u'munish puri'},
          {u'affiliations': [u'Punjabi University'],
           u'author_id': u'48D92C79',
           u'author_name': u'rajesh dhaliwal'},
          {u'affiliations': [u'Punjabi University'],
           u'author_id': u'7D9BD37C',
           u'author_name': u'r s singh'}],
         u'deleted': 0,
         u'description': None,
         u'doi': u'',
         u'is_valid': 1,
         u'issue': None,
         u'journal': u'Journal of Industrial Microbiology & Biotechnology',
         u'link': None,
         u'meta_description': None,
         u'meta_keywords': None,
         u'normalized_venue_name': u'j ind microbiol biotechnol',
         u'pages': None,
         u'parent_keywords': [u'Nuclear medicine',
          u'Psychology',
          u'Hydrology',
          u'Chromatography',
          u'X-ray crystallography',
          u'Nuclear fusion',
          u'Medicine',
          u'Fluid dynamics',
          u'Thermodynamics',
          u'Physics',
          u'Gas chromatography',
          u'Radiobiology',
          u'Engineering',
          u'Organic chemistry',
          u'High-performance liquid chromatography',
          u'Chemistry',
          u'Organic synthesis',
          u'Psychotherapist'],
         u'pub_date': u'2008-04-04 00:00:00',
         u'pubtype': None,
         u'rating_avg_weighted': 0,
         u'rating_clarity': 0.0,
         u'rating_clarity_weighted': 0.0,
         u'rating_innovation': 0.0,
         u'rating_innovation_weighted': 0.0,
         u'rating_num_weighted': 0,
         u'rating_reproducability': 0,
         u'rating_reproducibility_weighted': 0.0,
         u'rating_versatility': 0.0,
         u'rating_versatility_weighted': 0.0,
         u'review_count': 0,
         u'tag': [u'flow rate',
          u'operant conditioning',
          u'packed bed reactor',
          u'immobilized enzyme',
          u'specific activity'],
         u'title': u'Development of a stable continuous flow immobilized enzyme reactor for the hydrolysis of inulin',
         u'userAvg': 0.0,
         u'user_id': None,
         u'venue_name': u'Journal of Industrial Microbiology & Biotechnology',
         u'views_count': 0,
         u'volume': None},
        u'_type': u'listing'},
       {u'_id': u'7521A721',
        u'_index': u'scibase_listings',
        u'_score': 1.0,
        u'_source': {u'authors': [{u'author_id': u'7FF872BC',
           u'author_name': u'barbara eileen ryan'}],
         u'deleted': 0,
         u'description': None,
         u'doi': u'',
         u'is_valid': 1,
         u'issue': None,
         u'journal': u'The American Historical Review',
         u'link': None,
         u'meta_description': None,
         u'meta_keywords': None,
         u'normalized_venue_name': u'american historical review',
         u'pages': None,
         u'parent_keywords': [u'Social science',
          u'Politics',
          u'Sociology',
          u'Law'],
         u'pub_date': u'1992-01-01 00:00:00',
         u'pubtype': None,
         u'rating_avg_weighted': 0,
         u'rating_clarity': 0.0,
         u'rating_clarity_weighted': 0.0,
         u'rating_innovation': 0.0,
         u'rating_innovation_weighted': 0.0,
         u'rating_num_weighted': 0,
         u'rating_reproducability': 0,
         u'rating_reproducibility_weighted': 0.0,
         u'rating_versatility': 0.0,
         u'rating_versatility_weighted': 0.0,
         u'review_count': 0,
         u'tag': [u'social movements'],
         u'title': u"Feminism and the women's movement : dynamics of change in social movement ideology, and activism",
         u'userAvg': 0.0,
         u'user_id': None,
         u'venue_name': u'The American Historical Review',
         u'views_count': 0,
         u'volume': None},
        u'_type': u'listing'},
       {u'_id': u'7DAEB9A4',
        u'_index': u'scibase_listings',
        u'_score': 1.0,
        u'_source': {u'authors': [{u'author_id': u'0299B8E9',
           u'author_name': u'fraser j harbutt'}],
         u'deleted': 0,
         u'description': None,
         u'doi': u'',
         u'is_valid': 1,
         u'issue': None,
         u'journal': u'The American Historical Review',
         u'link': None,
         u'meta_description': None,
         u'meta_keywords': None,
         u'normalized_venue_name': u'american historical review',
         u'pages': None,
         u'parent_keywords': [u'Superconductivity',
          u'Nuclear fusion',
          u'Geology',
          u'Chemistry',
          u'Metallurgy'],
         u'pub_date': u'1988-01-01 00:00:00',
         u'pubtype': None,
         u'rating_avg_weighted': 0,
         u'rating_clarity': 0.0,
         u'rating_clarity_weighted': 0.0,
         u'rating_innovation': 0.0,
         u'rating_innovation_weighted': 0.0,
         u'rating_num_weighted': 0,
         u'rating_reproducability': 0,
         u'rating_reproducibility_weighted': 0.0,
         u'rating_versatility': 0.0,
         u'rating_versatility_weighted': 0.0,
         u'review_count': 0,
         u'tag': [u'iron'],
         u'title': u'The iron curtain : Churchill, America, and the origins of the Cold War',
         u'userAvg': 0.0,
         u'user_id': None,
         u'venue_name': u'The American Historical Review',
         u'views_count': 0,
         u'volume': None},
        u'_type': u'listing'},
       {u'_id': u'7B3236C5',
        u'_index': u'scibase_listings',
        u'_score': 1.0,
        u'_source': {u'authors': [{u'author_id': u'7DAB7B72',
           u'author_name': u'richard m freeland'}],
         u'deleted': 0,
         u'description': None,
         u'doi': u'',
         u'is_valid': 1,
         u'issue': None,
         u'journal': u'The American Historical Review',
         u'link': None,
         u'meta_description': None,
         u'meta_keywords': None,
         u'normalized_venue_name': u'american historical review',
         u'pages': None,
         u'parent_keywords': [u'Political Science', u'Economics'],
         u'pub_date': u'1985-01-01 00:00:00',
         u'pubtype': None,
         u'rating_avg_weighted': 0,
         u'rating_clarity': 0.0,
         u'rating_clarity_weighted': 0.0,
         u'rating_innovation': 0.0,
         u'rating_innovation_weighted': 0.0,
         u'rating_num_weighted': 0,
         u'rating_reproducability': 0,
         u'rating_reproducibility_weighted': 0.0,
         u'rating_versatility': 0.0,
         u'rating_versatility_weighted': 0.0,
         u'review_count': 0,
         u'tag': [u'foreign policy'],
         u'title': u'The Truman Doctrine and the origins of McCarthyism : foreign policy, domestic politics, and internal security, 1946-1948',
         u'userAvg': 0.0,
         u'user_id': None,
         u'venue_name': u'The American Historical Review',
         u'views_count': 0,
         u'volume': None},
        u'_type': u'listing'}],
      u'max_score': 1.0,
      u'total': 36429433},
     u'timed_out': False,
     u'took': 170}
    
  • RodrikTheReader
    RodrikTheReader about 4 years
    This answer is somewhat difficult for me to understand. I wish the language were clearer.
  • Martijn Pieters
    Martijn Pieters about 4 years
    @RodrikTheReader: I've updated my answer, hopefully it's a little clearer now than it was before.
  • iRestMyCaseYourHonor
    iRestMyCaseYourHonor almost 4 years
    I wish you had added the data to your answer so that it is much easier to follow. Still, this is the best explanation of json_normalize. Link to the aforementioned data example: pandas.pydata.org/pandas-docs/stable/reference/api/…
  • Martijn Pieters
    Martijn Pieters almost 4 years
    @iRestMyCaseYourHonor: the data used to be part of the question, it's unfortunate that it was changed so much in that respect. I've added a reference and the dictionary to my answer now.
  • Sandy007
    Sandy007 over 3 years
    Thank you for this solution. This helped me minimize a lot of code I wrote using the pandas normalize function.
  • Adam DS
    Adam DS about 3 years
    Thank you for this! Didn't know about the flatten_json library before. This will minimize my code by quite a bit.