Import Multiple .sql dump files into mysql database from shell


Solution 1

cat *.sql | mysql? Do you need them in any specific order?

If you have too many to handle this way, then try something like:

find . -name '*.sql' | awk '{ print "source",$0 }' | mysql --batch

This also gets around some problems with passing script input through a pipeline, though you shouldn't have any problems with pipeline processing under Linux. The nice thing about this approach is that the mysql utility reads each file itself instead of having everything read from stdin.
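Filled out with connection options (the `root` user and `mydb` database below are placeholders, not from the original answer), the pipeline might look like this:

```shell
# Hypothetical credentials and database name; substitute your own.
# awk turns each path into a "source <file>" client command, so the
# mysql client opens and reads each file itself rather than taking
# one giant concatenated stream on stdin.
find . -name '*.sql' | sort \
  | awk '{ print "source", $0 }' \
  | mysql --batch -u root -p mydb
```

The `sort` is optional; add it when the dumps must run in filename order.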

Solution 2

A one-liner that reads in all the .sql files and imports them:

for SQL in *.sql; do DB=${SQL/\.sql/}; echo importing $DB; mysql $DB < $SQL; done

The only trick is the bash substring replacement that strips out the .sql to get the database name.
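Note that `${SQL/\.sql/}` replaces the first `.sql` found anywhere in the name; a suffix removal is slightly safer. A variant of the same loop using `${SQL%.sql}`, which is anchored to the end of the name:

```shell
# ${SQL%.sql} strips a trailing ".sql" only, so a name like
# "my.sqlite.sql" still maps to the database name "my.sqlite".
for SQL in *.sql; do
  DB=${SQL%.sql}
  echo "importing $DB"
  mysql "$DB" < "$SQL"
done
```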

Solution 3

There is a superb little script at http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script which will take a huge mysqldump file and split it into a single file for each table. Then you can run this very simple script to load the database from those files:

for i in *.sql
do
  echo "file=$i"
  mysql -u admin_privileged_user --password=whatever your_database_here < $i
done

mydumpsplitter even works on .gz files, but it is much, much slower than gunzipping first, then running it on the uncompressed file.

I say huge, but I guess everything is relative. It took about 6-8 minutes to split a 2000-table, 200MB dump file for me.
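Since decompressing first is the faster route, the two-step workflow can be sketched like this (`gunzip -c` writes to stdout, so the original .gz files stay in place; the mysql credentials are carried over from the loop above):

```shell
# Step 1: decompress each .sql.gz next to the original.
for GZ in *.sql.gz; do
  gunzip -c "$GZ" > "${GZ%.gz}"
done

# Step 2: load the plain .sql files as above.
for SQL in *.sql; do
  echo "file=$SQL"
  mysql -u admin_privileged_user --password=whatever your_database_here < "$SQL"
done
```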

Solution 4

I don't remember the exact syntax, but it will be something like this:

 find . -name '*.sql'|xargs mysql ...
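One caveat: the mysql client does not execute .sql files passed as arguments; it would treat the first filename as a database name. So xargs needs to invoke a small shell that redirects each file into stdin. A sketch, with the root credentials being an assumption:

```shell
# mysql interprets its first non-option argument as a database name,
# not a file to run, so redirect each dump into stdin instead.
find . -name '*.sql' -print0 \
  | xargs -0 -n1 sh -c 'echo "importing $1"; mysql -u root -ppassword < "$1"' sh
```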

Solution 5

I created a script some time ago to do precisely this, which I called (completely uncreatively) "myload". It loads SQL files into MySQL.

Here it is on GitHub

It's simple and straightforward: it allows you to specify mysql connection parameters, and it will decompress gzipped SQL files on the fly. It assumes you have one file per database, and that the base of the filename is the desired database name.

So:

myload foo.sql bar.sql.gz

Will create databases called "foo" and "bar" (if they don't already exist) and import the SQL file into each.

For the other side of the process, I wrote this script (mydumpall) which creates the corresponding sql (or sql.gz) files for each database (or some subset specified either by name or regex).
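The actual scripts live at the GitHub links above; as a rough sketch of the same idea (filename base becomes the database name, .gz files are decompressed on the fly; this is an illustration, not the real myload code):

```shell
#!/bin/sh
# myload-style loader sketch: for each file, derive the database name
# from the filename base, create the database if missing, and pipe the
# SQL in, gunzipping on the fly when needed.
for f in "$@"; do
  db=$(basename "$f")
  db=${db%.gz}
  db=${db%.sql}
  mysql -e "CREATE DATABASE IF NOT EXISTS \`$db\`"
  case $f in
    *.gz) gunzip -c "$f" | mysql "$db" ;;
    *)    mysql "$db" < "$f" ;;
  esac
done
```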

Author: Derek Organ

Updated on July 11, 2022

Comments

  • Derek Organ
    Derek Organ almost 2 years

    I have a directory with a bunch of .sql files that mysql dumps of each database on my server.

    e.g.

    database1-2011-01-15.sql
    database2-2011-01-15.sql
    ...
    

    There are quite a lot of them actually.

    I need to create a shell script or single line probably that will import each database.

    I'm running on a Linux Debian machine.

    I'm thinking there is some way to pipe the results of an ls into some find command or something.

    Any help and education is much appreciated.

    EDIT

    So ultimately I want to automatically import one file at a time into the database.

    E.g. if I did it manually on one it would be:

    mysql -u root -ppassword < database1-2011-01-15.sql
    
  • Derek Organ
    Derek Organ over 13 years
    from doing a bit of research that is what I'm coming up with too. e.g. find . -name '*.sql' | xargs mysql -u root -ppassword Would that work?
  • Navi
    Navi over 13 years
    and -h host if you are on a remote server and probably also need to specify database name
  • Derek Organ
    Derek Organ over 13 years
    the database is referenced in each backup file, so the single line above works already, and it is the localhost
  • Derek Organ
    Derek Organ over 13 years
    In the end I used the cat *.sql approach but broke it down by letter so there wasn't too much info, e.g. cat a*.sql | mysql -u root -ppass
  • Arul Kumaran
    Arul Kumaran over 12 years
    I used ls -1 *.sql | awk '{ print "source",$0 }' | mysql --batch -u {username} -p{password} {dbname} as I have named my sql files sequentially and wanted to execute in that order
  • Swader
    Swader over 11 years
    Like a charm, thanks for this! Worked better than the accepted answer for me.
  • Satya Kalluri
    Satya Kalluri about 11 years
    @Luracast I used ls -1 *.sql | awk '{ print "source",$0 }' | mysql --batch -u {username} -p {dbname} to get it working. The password needs to be entered in the console when it actually prompts for it, not in the command itself. MySQL doesn't accept password in the command.
  • michelson
    michelson about 11 years
    @satya I believe that you can enter the password on the command line if you use --password=PA55w0rd instead of -p. I haven't tinkered with MySQL in quite some time, but I'm pretty sure that would work.
  • David Woods
    David Woods over 10 years
    @D.Shawley and @satya - you can enter the password on the command line with -p you just omit the space. e.g. mysql -u me -pPA55w0rd
  • e18r
    e18r over 7 years
    is --batch really necessary?
  • sneaky
    sneaky over 4 years
    Or use pv for progress: pv -p *.sql | mysql database taken by: https://stackoverflow.com/a/59139471/8398149