Redshift Spectrum: Automatically partition tables by date/folder

Solution 1

Option 1:

At most 20,000 partitions can be created per table. You can write a one-time script that adds partitions (up to that 20k limit) for all of the future S3 partition folders.

For example, even if the folder s3://bucket/tickit/spectrum/sales_partition/saledate=2017-12/ doesn't exist yet, you can still add a partition for it:

alter table spectrum.sales_part
add partition(saledate='2017-12-01') 
location 's3://bucket/tickit/spectrum/sales_partition/saledate=2017-12/';
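
If you want to generate those statements ahead of time, a one-time script could look like the following minimal sketch. The table name and location template are the hypothetical ones from the example above, and it just prints the DDL so you can run it through whatever Redshift client you already use:

from datetime import date

# Hypothetical names taken from the example above; adjust to your table/bucket.
TABLE = "spectrum.sales_part"
LOCATION_TEMPLATE = "s3://bucket/tickit/spectrum/sales_partition/saledate={ym}/"

def monthly_partition_ddl(start_year, end_year):
    # Yield one ALTER TABLE statement per month; partitions can be added
    # even before the matching S3 folder exists.
    for year in range(start_year, end_year + 1):
        for month in range(1, 13):
            first = date(year, month, 1)
            ym = first.strftime("%Y-%m")
            yield (
                f"alter table {TABLE} "
                f"add if not exists partition(saledate='{first}') "
                f"location '{LOCATION_TEMPLATE.format(ym=ym)}';"
            )

if __name__ == "__main__":
    # Print the DDL; pipe it into your SQL client or run it however you prefer.
    for stmt in monthly_partition_ddl(2017, 2027):
        print(stmt)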

Option 2:

The AWS Big Data blog covers a similar approach for automatically partitioning Hive external tables with AWS services:

https://aws.amazon.com/blogs/big-data/data-lake-ingestion-automatically-partition-hive-external-tables-with-aws/

Solution 2

Another, more precise way to go about it: create a Lambda function that is triggered by the ObjectCreated notification from the S3 bucket, and have it run the SQL to add the partition:

alter table tblname add if not exists partition (partition clause) location 's3://mybucket/location';
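
As a sketch of what that Lambda might look like, assuming the object keys follow the saledate=YYYY-MM/ layout from the example above and that the SQL is run through the Redshift Data API (which postdates this answer; any Redshift connection would do). The cluster identifier, database, user, and table name below are hypothetical placeholders:

import re
import urllib.parse

import boto3

# Hypothetical placeholders; point these at your own cluster and table.
CLUSTER_ID = "my-cluster"
DATABASE = "dev"
DB_USER = "admin"
TABLE = "spectrum.sales_part"

redshift_data = boto3.client("redshift-data")

def lambda_handler(event, context):
    # Triggered by s3:ObjectCreated:*. Extract the saledate=YYYY-MM folder
    # from the new object's key and add the matching partition; IF NOT
    # EXISTS keeps the call idempotent when many files land in one folder.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        match = re.search(r"saledate=(\d{4}-\d{2})/", key)
        if not match:
            continue  # the object is not under a partition folder

        ym = match.group(1)
        prefix = key[:match.end()]  # key path up to and including the folder
        redshift_data.execute_statement(
            ClusterIdentifier=CLUSTER_ID,
            Database=DATABASE,
            DbUser=DB_USER,
            Sql=(
                f"alter table {TABLE} "
                f"add if not exists partition(saledate='{ym}-01') "
                f"location 's3://{bucket}/{prefix}';"
            ),
        )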


Comments

  • GoatInTheMachine, about 1 year ago

    We currently generate a daily CSV export that we upload to an S3 bucket, into the following structure:

    <report-name>
    |--reportDate-<date-stamp>
        |-- part0.csv.gz
        |-- part1.csv.gz
    

    We want to be able to run reports partitioned by daily export.

    According to the Redshift Spectrum documentation, you can partition data by a key based on the source S3 folder from which your Spectrum table reads its data. However, from the example, it looks like you need a separate ALTER statement for each partition:

    alter table spectrum.sales_part
    add partition(saledate='2008-01-01') 
    location 's3://bucket/tickit/spectrum/sales_partition/saledate=2008-01/';
    
    alter table spectrum.sales_part
    add partition(saledate='2008-02-01') 
    location 's3://awssampledbuswest2/tickit/spectrum/sales_partition/saledate=2008-02/';
    

    Is there any way to set the table up so that data is automatically partitioned by the folder it comes from, or do we need a daily job to ALTER the table to add that day's partition?

  • GoatInTheMachine, about 6 years ago
    That's perfect. I have no idea why I didn't think of that, thank you!
  • Davos, over 4 years ago
    The docs also say "You can add multiple partitions ... if you use the AWS Glue catalog, you can add up to 100 partitions in a single statement", just by repeating the second and third lines of your example for each partition (a sketch of this batched form appears after this thread). docs.aws.amazon.com/redshift/latest/dg/…
  • Karl Anka, about 3 years ago
    I can't find any info on the maximum number of partitions in the AWS docs. Could you link to it? Update: I created 20,001 partitions for a table without any problems. Maybe the 20k limit is deprecated.
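
Following up on Davos' comment about batching: with the AWS Glue catalog you can add up to 100 partitions per statement by repeating the PARTITION ... LOCATION pair. A minimal sketch of building such batched DDL, reusing the hypothetical table and bucket names from the scripts above:

# Hypothetical names reused from the sketches above.
TABLE = "spectrum.sales_part"
LOCATION_TEMPLATE = "s3://bucket/tickit/spectrum/sales_partition/saledate={ym}/"

def batched_partition_ddl(year_months, batch_size=100):
    # Yield ALTER TABLE statements that each add up to batch_size partitions
    # (100 is the documented per-statement limit with the Glue catalog).
    for i in range(0, len(year_months), batch_size):
        clauses = [
            f"partition(saledate='{ym}-01') "
            f"location '{LOCATION_TEMPLATE.format(ym=ym)}'"
            for ym in year_months[i:i + batch_size]
        ]
        yield f"alter table {TABLE} add if not exists\n" + "\n".join(clauses) + ";"

if __name__ == "__main__":
    months = [f"{y}-{m:02d}" for y in (2017, 2018) for m in range(1, 13)]
    for stmt in batched_partition_ddl(months):
        print(stmt)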