Hive enforces schema during read time?


You've touched on one of the reasons why Hadoop and other NoSQL strategies have been so successful. I'm not sure if you were expecting a dissertation or not, but here it is! The extra flexibility and agility in data analysis have probably contributed to the explosion of "data science", simply because they make large-scale data analysis easier in general.

A traditional relational database stores data with the schema in mind: it knows that the second column is an integer, that the table has 40 columns, and so on. Therefore, you need to specify your schema ahead of time and have it well planned out. This is "schema on write" -- that is, the schema is applied when the data is being written to the data store.
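For example (this is just a sketch with made-up table and column names), in a typical RDBMS you declare the types before any data goes in, and the database parses and validates every row as it is written:

    -- Schema on write: the structure is declared before any data is stored.
    -- (Hypothetical table; names are for illustration only.)
    CREATE TABLE page_views (
      user_id    INT,
      url        VARCHAR(2048),
      view_time  TIMESTAMP
    );

    -- Each row is parsed and checked against the declared types right here,
    -- at write time. A value that doesn't fit the type is rejected before it is stored.
    INSERT INTO page_views VALUES (42, 'http://example.com/home', '2012-01-01 10:00:00');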

Hive (in some cases), Hadoop, and many other NoSQL systems are instead built around "schema on read" -- the schema is applied as the data is read off the data store. Consider the following line of raw text:

A:B:C~E:F~G:H~~I::J~K~L

There are a couple of ways to interpret this: ~ could be the delimiter, or maybe : is the delimiter. Who knows? With schema on read, it doesn't matter -- you decide what the schema is when you analyze the data, not when you write the data. The example is a bit contrived in that you'll probably never encounter this exact case, but hopefully it gets the point across.

With schema on read, you just load your data into the data store and figure out how to parse and interpret it later. At its core, schema on read means write your data first and figure out what it is later; schema on write means figure out what your data is first, then write it.
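To make that concrete in Hive terms (the paths and table names below are made up), the same raw file can sit untouched in HDFS while two different external tables impose two different schemas on it, and the parsing only happens when you run a query:

    -- Schema on read: these statements don't move or validate the raw file,
    -- they only describe how to interpret it. (Hypothetical paths and names.)

    -- Interpretation 1: treat '~' as the delimiter.
    CREATE EXTERNAL TABLE raw_as_tilde (c1 STRING, c2 STRING, c3 STRING, c4 STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '~'
    STORED AS TEXTFILE
    LOCATION '/data/raw/mystery_file';

    -- Interpretation 2: treat ':' as the delimiter -- over the exact same bytes.
    CREATE EXTERNAL TABLE raw_as_colon (c1 STRING, c2 STRING, c3 STRING, c4 STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ':'
    STORED AS TEXTFILE
    LOCATION '/data/raw/mystery_file';

    -- The parsing happens here, at read time:
    SELECT * FROM raw_as_tilde;
    SELECT * FROM raw_as_colon;

Dropping either external table only removes the metadata; the raw file stays where it is, so you can reinterpret it as many times as you like.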


There is a tradeoff here, and some of the points below are subjective and reflect my own opinion.

Benefits of schema on write:

  • Better type safety and data cleansing for the data at rest
  • Typically more efficient (in both storage size and computation) since the data is already parsed

Downsides of schema on write:

  • You have to plan ahead of time what your schema is before you store the data (i.e., you have to do ETL)
  • Typically you throw away the original data, which could be bad if you have a bug in your ingest process
  • It's harder to have different views of the same data

Benefits of schema on read:

  • Flexibility in defining how your data is interpreted at load time
    • This gives you the ability to evolve your "schema" as time goes on
    • This allows you to have different versions of your "schema"
    • This allows the original source data format to change without having to consolidate to one data format
  • You get to keep your original data
  • You can load your data before you know what to do with it (so you don't drop it on the ground)
  • Flexibility to store unstructured, unclean, and/or unorganized data

Downsides of schema on read:

  • Generally less efficient, because you have to reparse and reinterpret the data every time it is read (this can be expensive with formats like XML)
  • The data is not self-documenting (i.e., you can't look at a schema to figure out what the data is)
  • More error prone, and your analytics have to account for dirty data (see the sketch after this list)
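As a small illustration of that last point (reusing the hypothetical raw_as_colon table from above), Hive returns NULL when a value can't be cast to the requested type, so the query itself has to deal with dirty rows:

    -- With schema on read, type problems only surface at query time:
    -- a value that can't be cast to INT becomes NULL instead of failing on load.
    SELECT COUNT(*) AS bad_rows
    FROM   raw_as_colon
    WHERE  c2 IS NOT NULL
      AND  CAST(c2 AS INT) IS NULL;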

Comments

  • London guy (almost 2 years ago)

    What is the difference and meaning of these two statements that I encountered during a lecture here:

    1. Traditional databases enforce schema during load time.

    and

    2. Hive enforces schema during read time.

  • Paul M (almost 12 years ago)
    Great explanation of the benefits of schema on read. We use both Hive and a fairly large data warehouse appliance (an MPP RDBMS) in our environment. While there is no denying the raw speed of the appliance, the flexibility of Hive has been a real boon for productivity.
  • London guy (almost 12 years ago)
    I just have one question. What do you mean by "throw away / keep your original data"? What is the original data, and why do you throw it away in one case and keep it in the other? Just this point.
  • Donald Miner (almost 12 years ago)
    In Hadoop, in most cases, you load your data into the file system and just leave it there. If you put your post-processed data into the RDBMS, you typically have nowhere to put your original data.
  • Jyoti Ranjan (over 6 years ago)
    Don't you think that when you load data into a table, the table has already been created in Hive, with the delimiter character and other information? The data is parsed according to the table's delimiter character.