Amazon Redshift: bulk insert vs COPYing from S3

Solution 1

Redshift is an analytical DB, optimized to let you query millions and billions of records. It is also optimized to let you ingest those records very quickly using the COPY command.

The COPY command is designed to load multiple files in parallel into the multiple nodes of the cluster. For example, if you have a cluster of 5 small nodes (dw2.xl), you can copy data 10 times faster if your data is split across multiple files (20, for example). There is a balance between the number of files and the number of records in each file, as each file has some small overhead.
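
As an illustration only (the bucket, prefix, table name, and credentials below are hypothetical), a COPY pointed at a common key prefix loads every matching file in parallel across the cluster's slices:

    -- load all files sharing the prefix clicks/2015-06-01/part_
    -- (e.g. part_00 ... part_19) in parallel across the slices
    COPY clicks
    FROM 's3://my-bucket/clicks/2015-06-01/part_'
    CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
    CSV;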

This should lead you to a balance between the frequency of the COPY (for example, every 5 or 15 minutes rather than every 30 seconds) and the size and number of the event files.

Another point to consider is the 2 types of Redshift nodes available: the SSD ones (dw2.xl and dw2.8xl) and the magnetic ones (dw1.xl and dw1.8xl). The SSD ones are also faster in terms of ingestion. Since you are looking for very fresh data, you probably prefer to run with the SSD ones, which are usually lower cost for less than 500 GB of compressed data. If over time you have more than 500 GB of compressed data, you can consider running 2 different clusters: one for "hot" data on SSD with the data of the last week or month, and one for "cold" data on magnetic disks with all your historical data.

Lastly, you don't really need to upload the data to S3, which is the major part of your ingestion time. You can copy the data directly from your servers using the SSH COPY option. See more information about it here: http://docs.aws.amazon.com/redshift/latest/dg/loading-data-from-remote-hosts.html
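
As a rough sketch of that option (host, command, key, and bucket below are placeholders): you describe the remote hosts and the command whose output Redshift should ingest in an SSH manifest file stored in S3, then point COPY at that manifest with the SSH option:

    -- ssh_manifest.json, stored in S3, might look like:
    -- { "entries": [ { "endpoint": "queue-host.example.com",
    --                  "command": "cat /var/log/clicks.csv",
    --                  "mandatory": true,
    --                  "publickey": "<host public key>",
    --                  "username": "ec2-user" } ] }
    COPY clicks
    FROM 's3://my-bucket/ssh_manifest.json'
    CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
    DELIMITER ','
    SSH;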

If you are able to split your Redis queues across multiple servers, or at least into multiple queues with different log files, you can probably get a very good records-per-second ingestion rate.

Another pattern you may want to consider for near-real-time analytics is Amazon Kinesis, the streaming service. It lets you run analytics on data with a delay of seconds, while at the same time preparing the data to be copied into Redshift in a more optimized way.

Solution 2

S3 COPY works faster for larger data loads. When you have, say, thousands to millions of records that need to be loaded into Redshift, an S3 upload + COPY will work faster than INSERT queries.

S3 COPY works in parallel mode.

When you create a table and load it with INSERT, there is a limit on batch size: the maximum size of a single SQL statement is 16 MB. So you need to watch the size of each SQL batch (it depends on the size of each insert query).
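
For illustration, a multi-row INSERT packs many rows into a single statement; the clicks table and its columns here are hypothetical, and the whole statement text has to stay under that 16 MB limit:

    -- one statement carrying several rows; keep the total SQL text under 16 MB
    INSERT INTO clicks (user_id, url, clicked_at) VALUES
        (101, '/home',    '2015-06-01 12:00:00'),
        (102, '/pricing', '2015-06-01 12:00:01'),
        (103, '/home',    '2015-06-01 12:00:02');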

S3 COPY automatically applies encoding (compression) to your table. When you create a table and do a sample load using COPY, you can see compression applied automatically.

But if you use the INSERT command from the beginning, you will notice that no compression is applied, which results in the table taking more space in Redshift and, in some cases, slower query times.

If you wish to use INSERT commands, create the table with an encoding applied to each column, to save space and get faster response times.
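
A minimal sketch of that (the clicks table and the chosen encodings are illustrative, not recommendations for your schema); ANALYZE COMPRESSION can suggest encodings from a sample load:

    -- declare an encoding per column so INSERT-loaded data is still compressed
    CREATE TABLE clicks (
        user_id    BIGINT        ENCODE lzo,
        url        VARCHAR(2048) ENCODE lzo,
        clicked_at TIMESTAMP     ENCODE delta
    );

    -- after loading a representative sample, ask Redshift for suggested encodings
    ANALYZE COMPRESSION clicks;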

Solution 3

It might be worth implementing micro-batching while performing bulk uploads to Redshift. The article below is also worth reading, as it covers other techniques for better performance of the COPY command.

http://blogs.aws.amazon.com/bigdata/post/Tx2ANLN1PGELDJU/Best-Practices-for-Micro-Batch-Loading-on-Amazon-Redshift

Solution 4

My test results differ a bit. I was loading a CSV file into Redshift from a Windows desktop.

  • Row insert was the slowest.
  • Multi-row insert was 5 times faster than row insert.
  • S3+COPY was 3 times faster than multi-row insert.

What contributed to the faster bulk S3+COPY insert:

  • The fact that you do not have to build an INSERT statement from each CSV line.
  • Stream was compressed before multipart upload to S3.
  • COPY command was extremely fast.

I compiled all my findings into one Python script, CSV_Loader_For_Redshift.
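
For context, the COPY side of that pipeline might look roughly like the sketch below, assuming the CSV was gzip-compressed before the multipart upload to S3 (bucket, key, table, and credentials are placeholders):

    -- load a gzip-compressed CSV that was multipart-uploaded to S3
    COPY clicks
    FROM 's3://my-bucket/clicks/batch_0001.csv.gz'
    CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
    CSV
    GZIP;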

Author

Benjamin Crouzier

I am a former developer (rails/react/aws/postgres), and I now study AI and cognitive science full-time. Currently living in Paris. Github profile: https://github.com/pinouchon Blog: http://pinouchon.github.io/

Updated on July 21, 2022

Comments

  • Benjamin Crouzier, almost 2 years

    I have a Redshift cluster that I use for an analytics application. I have incoming data that I would like to add to a clicks table. Let's say I have ~10 new 'clicks' that I want to store each second. If possible, I would like my data to be available as soon as possible in Redshift.

    From what I understand, because of the columnar storage, insert performance is bad, so you have to insert in batches. My workflow is to store the clicks in Redis, and every minute I insert the ~600 clicks from Redis into Redshift as a batch.

    I have two ways of inserting a batch of clicks into Redshift: a multi-row insert query, or an upload to S3 followed by a COPY.

    I've done some tests (on a clicks table that already had 2 million rows):

                 | multi-row insert strategy |       S3 Copy strategy    |
                 |---------------------------+---------------------------+
                 |       insert query        | upload to s3 | COPY query |
    -------------+---------------------------+--------------+------------+
    1 record     |           0.25s           |     0.20s    |   0.50s    |
    1k records   |           0.30s           |     0.20s    |   0.50s    |
    10k records  |           1.90s           |     1.29s    |   0.70s    |
    100k records |           9.10s           |     7.70s    |   1.50s    |
    

    As you can see, in terms of performance, it looks like I gain nothing by first copying the data to S3. The upload + copy time is about equal to the insert time.

    Questions:

    What are the advantages and drawbacks of each approach? What is the best practice? Did I miss anything?

    And a side question: is it possible for Redshift to COPY the data automatically from S3 via a manifest? I mean, COPYing the data as soon as new .csv files are added to S3? Doc here and here. Or do I have to create a background worker myself to trigger the COPY commands? (A sketch of a manifest-driven COPY is included after the pros/cons list below.)

    My quick analysis:

    In the documentation about consistency, there is no mention of loading the data via multi-row inserts. It looks like the preferred way is COPYing from S3 with unique object keys (each .csv on S3 has its own unique name)...

    • S3 Copy strategy:
      • PROS: looks like the recommended practice from the docs.
      • CONS: More work (I have to manage buckets and manifests and a cron that triggers the COPY commands...)
    • Multi-row insert strategy
      • PROS: Less work. I can call an insert query from my application code
      • CONS: doesn't look like a standard way of importing data. Am I missing something?
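
    For reference, a manifest-driven COPY would look something like this (bucket and file keys are made up, and something, e.g. a cron job or background worker, still has to issue the COPY):

        -- manifest file (JSON, stored in S3) listing the new .csv files:
        -- { "entries": [
        --     { "url": "s3://my-bucket/clicks/2015-06-01-12-00.csv", "mandatory": true },
        --     { "url": "s3://my-bucket/clicks/2015-06-01-12-01.csv", "mandatory": true } ] }
        COPY clicks
        FROM 's3://my-bucket/manifests/2015-06-01-12.manifest'
        CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
        CSV
        MANIFEST;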
  • Benjamin Crouzier, over 9 years
    Are you sure that the inserted rows are not compressed? Where can I find this in the docs? Can this be solved with a VACUUM and/or ANALYZE?
  • Sandesh Deshmane, over 9 years
    When there is an empty table created without any encoding types and we load it using INSERT statements, no compression is applied. To check the encoding of each column, run: select "column", type, encoding from pg_table_def where tablename = 'mutable'. Try creating a new empty table, load data using the COPY command, run the query above, and you will see the difference.
  • Sandesh Deshmane, over 9 years
    @ Make sure that, to test both cases, you create empty tables and load data using COPY into one table and INSERT into the other. Make sure you load 10k records and compare the size of the tables as well. Refer to this for the table inspector scripts: docs.aws.amazon.com/redshift/latest/dg/…
  • ivan_pozdeev, almost 8 years
    The results included in the post are too shallow (query size dependence? trends?)
  • Alex B, almost 8 years
    @ivan_pozdeev what have trends got to do with it?
  • ivan_pozdeev, almost 8 years
    By trends I mean how comparative times change with different input sizes
  • Alex B, almost 8 years
    @ivan_pozdeev makes sense.
  • Daniel Pinyol, about 5 years
    Hi @AlexB, the Python script link to CSV_Loader_For_Redshift is broken.