How to copy from CSV file to PostgreSQL table with headers in CSV file?


Solution 1

This worked; the first row of the file contains the column names, and the HEADER option tells COPY to skip it.

COPY wheat FROM 'wheat_crop_data.csv' DELIMITER ';' CSV HEADER

Solution 2

With the Python library pandas, you can easily create column names and infer data types from a CSV file.

from sqlalchemy import create_engine
import pandas as pd

# connect to the target database
engine = create_engine('postgresql://user:pass@localhost/db_name')
# read_csv takes column names from the header row and infers data types
df = pd.read_csv('/path/to/csv_file')
# to_sql creates the table (here named 'pandas_db') and inserts the rows
df.to_sql('pandas_db', engine)

The if_exists parameter can be set to replace or append to an existing table, e.g. df.to_sql('pandas_db', engine, if_exists='replace'). This works for additional input file types as well; see the pandas documentation for read_csv and for to_sql.
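As noted in the comments below, to_sql with its defaults can be very slow for large files; passing chunksize (and, on backends that support it, method='multi') batches the inserts. A minimal runnable sketch; it uses an in-memory SQLite engine and a tiny stand-in DataFrame so it runs anywhere, but for PostgreSQL you would keep the postgresql:// URL and pd.read_csv call from above (the table and column names are placeholders):

```python
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite engine so this sketch runs anywhere; for PostgreSQL use
# create_engine('postgresql://user:pass@localhost/db_name') instead.
engine = create_engine('sqlite://')

# Stand-in for df = pd.read_csv('/path/to/csv_file')
df = pd.DataFrame({'region': ['north', 'south'], 'yield': [4.2, 3.8]})

# chunksize writes the rows in batches; method='multi' packs several rows
# into each INSERT statement, usually much faster than the default
# one-row-per-statement behaviour.
df.to_sql('pandas_db', engine, if_exists='replace', index=False,
          chunksize=1000, method='multi')

# Read the table back to confirm it was created with the inferred columns
print(pd.read_sql('select * from pandas_db', engine))
```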

Solution 3

Alternative: from the terminal, without server-side file permissions

The PostgreSQL documentation for COPY says, in its Notes section:

The path will be interpreted relative to the working directory of the server process (normally the cluster's data directory), not the client's working directory.

So, generally, a relative path will fail with psql or any other client, even against a local server. And if you publish a COPY command for other users, e.g. in a GitHub README, readers will hit the same problem.

The only way to use a relative path with client-side permissions is to copy from STDIN:

When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.

as shown here:

psql -h remotehost -d remote_mydb -U myuser -c \
   "copy mytable (column1, column2) from STDIN with delimiter as ','" \
   < ./relative_path/file.csv
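The same STDIN trick works from Python. A minimal sketch, assuming psycopg2 is available; the table, column names, and file path are placeholders. The helper that builds the statement runs as-is, while the actual transfer (which needs a live connection) is indicated in comments:

```python
# COPY ... FROM STDIN streams the file over the client connection, so a
# relative path and client-side read permission are all that is needed.
def build_copy_sql(table: str, columns: tuple, delimiter: str = ',') -> str:
    cols = ', '.join(columns)
    return (f"COPY {table} ({cols}) FROM STDIN "
            f"WITH (FORMAT csv, DELIMITER '{delimiter}')")

sql = build_copy_sql('mytable', ('column1', 'column2'))

# With psycopg2 (assumption: installed, and conn is an open connection),
# the transfer itself would look like:
#   with open('./relative_path/file.csv') as f:
#       with conn.cursor() as cur:
#           cur.copy_expert(sql, f)
#       conn.commit()
print(sql)
```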

Solution 4

I have been using this function for a while with no problems. You just need to provide the number of columns in the CSV file, and it will take the header names from the first row and create the table for you:

create or replace function data.load_csv_file
    (
        target_table  text, -- name of the table that will be created
        csv_file_path text,
        col_count     integer
    )

    returns void

as $$

declare
    iter      integer; -- dummy integer to iterate columns with
    col       text; -- to keep column names in each iteration
    col_first text; -- first column name, e.g., top left corner on a csv file or spreadsheet

begin
    set schema 'data';

    create table temp_table ();

    -- add just enough number of columns
    for iter in 1..col_count
    loop
        execute format ('alter table temp_table add column col_%s text;', iter);
    end loop;

    -- copy the data from csv file
    execute format ('copy temp_table from %L with delimiter '','' quote ''"'' csv ', csv_file_path);

    iter := 1;
    col_first := (select col_1
                  from temp_table
                  limit 1);

    -- update the column names based on the first row which has the column names
    for col in execute format ('select unnest(string_to_array(trim(temp_table::text, ''()''), '','')) from temp_table where col_1 = %L', col_first)
    loop
        execute format ('alter table temp_table rename column col_%s to %s', iter, col);
        iter := iter + 1;
    end loop;

    -- delete the columns row // using quote_ident or %I does not work here!?
    execute format ('delete from temp_table where %s = %L', col_first, col_first);

    -- change the temp table name to the name given as parameter, if not blank
    if length (target_table) > 0 then
        execute format ('alter table temp_table rename to %I', target_table);
    end if;
end;

$$ language plpgsql;
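A hypothetical call from Python via psycopg2 might look like the sketch below; psycopg2 is an assumption, and the table name, file path, and column count are placeholders. Note that because the function runs COPY with a file path (not STDIN), the CSV must be readable by the server process:

```python
# Sketch of invoking the load_csv_file function defined above with psycopg2
# (assumption: psycopg2 is installed; 'wheat', the path, and 3 are
# placeholder values). Parameters are bound server-side, so no manual
# quoting is needed:
#   import psycopg2
#   conn = psycopg2.connect('dbname=db_name user=user')
#   with conn.cursor() as cur:
#       cur.execute(call, params)
#   conn.commit()
call = "select data.load_csv_file(%s, %s, %s)"
params = ('wheat', '/srv/data/wheat_crop_data.csv', 3)
print(call, params)
```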


Author: Soatl

I am a polyglot developer with a Masters in Computer Science from Georgia Tech. I focus on full-stack development and enjoy learning about cybersecurity principles and machine learning. I also have experience with big data as well as DevOps. #SOreadytohelp

Updated on April 13, 2022

Comments

  • Soatl about 2 years

    I want to copy a CSV file to a Postgres table. There are about 100 columns in this table, so I do not want to rewrite them if I don't have to.

I am using the \copy table from 'table.csv' delimiter ',' csv; command, but without the table created I get ERROR: relation "table" does not exist. If I add a blank table I get no error, but nothing happens. I tried this command two or three times and there was no output or messages, but when I checked the table through PGAdmin it had not been updated.

    Is there a way to import a table with headers included like I am trying to do?

    • wildplasser almost 11 years
      Your table is named table ? Very confusing. Does the table exist, or do you want to create it based on the CSV? (you can't)
    • Soatl almost 11 years
      Well, I named it something else, but for this example let's call it table. I tried with and without it existing. I also tried \copy table(column1, column2, ...) from 'table.csv' delimiter ',' csv; with no luck either. Ideally the table could be created from the CSV alone, using the headers in that file.
    • Achekroud over 8 years
      Just a heads up for anyone planning on turning a large csv into a postgres table -- postgres is capped at 1600 columns in a single table. You cannot chunk tables into 1600-column-sized ones and then join them after. You need to redesign the db.
    • citynorman over 5 years
      If python is available to you, you can use d6tstack. It takes care of schema changes too.
  • Exocom about 10 years
    I think the problem with this command is that you have to be the DB superuser; \copy works as a normal user, too.
  • Daniel Vérité almost 10 years
    COPY does not create a table or add columns to it, it adds rows to an existing table with its existing columns. Presumably the asker wants to automate the creation of the ~100 columns, and COPY does not have this functionality, as of PG 9.3 at least.
  • G. Cito almost 10 years
    @Exocom good catch. Since I'm never an admin or superuser for DBs on the postgres systems I use (pgadmin makes me owner of the databases I use and gives me limited privileges/roles) I must have used `\COPY`. Cheers
  • G. Cito almost 10 years
    @Daniel I understood the user's table already existed and had all the columns they needed and that they wanted to simply ADD data.
  • brock over 8 years
    I find that pd.DataFrame.from_csv gives me less trouble, but this answer is by far the easiest way to do this, IMO.
  • joelostblom over 8 years
    True, I am not sure why I typed pd.read_excel, instead of pd.read_csv. I updated the answer.
  • Achekroud over 8 years
    this is a fantastic solution for when you do not want to pre-create the table that will hold a large csv. Just a heads up though -- postgres can only take 1600 columns in a table. Apparently other DB engines will allow more. Having this many columns is apparently poor SQL form, although this consensus has yet to filter through to epidemiology.
  • mehmet almost 7 years
    Don't forget to change set schema 'data'; to whatever applies in your case.
  • citynorman over 5 years
    By default df.to_sql() is VERY SLOW, to speed this up you can use d6tstack. It takes care of schema changes too.
  • Mithril over 5 years
    Got syntax error at or near "HEADER" (LINE 2: delimiter ',' CSV HEADER) on AWS Redshift.
  • Ṃųỻịgǻňạcểơửṩ about 5 years
    This comes directly from postgresql.org/docs/9.2/sql-copy.html, the documentation. Just add the headers.
  • i4cfutures about 2 years
    I'm getting the error: ERROR: extra data after last expected column
  • yurenchen about 2 years
    @i4cfutures the CSV columns should match the table columns, or specify the columns in SQL, as in COPY TB_NAME(COL1,COL2).... If the CSV has extra columns, you'd better delete them before importing (I'm not sure of another solution).