Piping mysqldump to mysql


Solution 1

The problem may be that the load on the server gets too high when it is dumping and loading at the same time. Piping also means you lose some of the optimizations a dump file gives you, such as extended inserts and the ability to disable foreign-key checks during the import.

I would recommend using mysqldump to generate a backup file, then loading it with mysql. That way the load on your server is reduced, and, as you said, it always works. You can even automate both steps in a bash script so that you do not need to run the mysqldump and load commands by hand.
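A minimal sketch of such a script (the database names and dump location are placeholders; add -u/-p credentials and any extra error handling your setup requires). The `command -v` guard only exists so the sketch is safe to run on a machine that has no MySQL client tools installed:

```shell
#!/usr/bin/env bash
# Hypothetical dump-then-load script: db1 and db2 are placeholder names.
set -u

SRC_DB="db1"
DST_DB="db2"
DUMP_FILE="$(mktemp)"    # temporary file holding the dump

if command -v mysqldump >/dev/null && command -v mysql >/dev/null; then
    # Dump first; load into the destination only if the dump succeeded.
    mysqldump --opt "$SRC_DB" > "$DUMP_FILE" \
        && mysql "$DST_DB" < "$DUMP_FILE"
else
    echo "MySQL client tools not found; skipping dump/load" >&2
fi

rm -f "$DUMP_FILE"       # always clean up the intermediate dump file
```

Splitting the dump and the load like this also leaves you the dump file to inspect if the import fails partway through.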

Solution 2

"MySQL server has gone away" is a symptom of a max packet error. http://dev.mysql.com/doc/refman/5.0/en/gone-away.html

Modify your command to specify a larger value for max_allowed_packet.

mysqldump --opt db1 | mysql --max_allowed_packet=32M db2

The default is 1M. It may take trial and error to get the right value. http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html#sysvar_max_allowed_packet
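Note that `max_allowed_packet` exists on both the client and the server, and the effective limit is whichever side is smaller. If the client-side flag alone does not help, the server value may need raising too, for example in `my.cnf` (32M is an illustrative value, not a recommendation):

```ini
[mysqld]
max_allowed_packet=32M
```

The server must be restarted (or the global variable set at runtime) for the change to take effect.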

Solution 3

Do you need to redirect the stderr stream as well as the stdout from the mysqldump? Error messages may be getting interleaved with the dump output. Try

mysqldump --opt db1 | mysql db2

Solution 4

The problem is that you are redirecting stderr to stdout, so any error messages are piped into mysql and interpreted as SQL. Remove the 2>&1; the real error will then show up on your terminal.
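The effect is easy to reproduce without MySQL at all; here plain `echo` commands stand in for mysqldump writing SQL to stdout and an error message to stderr:

```shell
# With 2>&1, the error text is mixed into the SQL stream that the
# pipe consumer (mysql) would then try to parse as a statement.
mixed=$( { echo "INSERT INTO t VALUES (1);"; echo "mysqldump: Couldn't execute ..." >&2; } 2>&1 )

# Without the redirection, stderr goes to the terminal and only
# real SQL travels down the pipe.
clean=$( { echo "INSERT INTO t VALUES (1);"; echo "mysqldump: Couldn't execute ..." >&2; } 2>/dev/null )

printf '%s\n' "$mixed"   # contains the fake error line
printf '%s\n' "$clean"   # SQL only
```

This is exactly why the truncated mysqldump error appeared inside an `ERROR 1064` syntax message: mysql was fed the error text as if it were SQL.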

Author: parserr

Updated on March 08, 2020

Comments

  • parserr, about 4 years ago

    Sometimes I need to copy MySQL database (db1) to another database (db2). I found this command to be concise and effective:

    mysqldump --opt db1 | mysql db2
    

    It was working fine, but now it breaks with the following error:

    ERROR 1064 (42000) at line 1586: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: Couldn't execute 'SHOW TRIGGERS LIKE 'some_table_name'': MySQL server ' at line 1

    The first thing that comes to mind is that the database is too big for piping like this (the uncompressed SQL dump is >1G; 1090526011 bytes at the moment, to be precise). When I do mysqldump > file and then mysql < file, it works fine with no errors. The table mentioned in the error message (some_table_name) is neither big nor special.

    The second idea comes from the impression that the error message might be truncated, and that it actually says

    "...MySQL server has gone away"

    Quick research suggests that the maximum number of open files (for MySQL and/or the system) may have been reached. So I tried adding --skip-lock-tables to mysqldump and raising open-files-limit, but no luck; same error.

    The obvious solution is to dump and then import (since that works fine), but piping seems better and cleaner to me (let me know if I'm wrong), plus I'm curious to find out what causes this problem. Did I hit some limit that affects command piping?

    I've been doing this on a hosting server running MySQL 5.1.60 on Linux, and on my dev machine running MySQL 5.1.58 on Linux. The latter gives a slightly different error:

    mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table other_table_name at row: 7197


    UPDATE: The problem is solved by doing a separate dump and import, without the pipe. Even though I feel it's not really an answer to my question, ssmusoke's suggestions were the most to the point, resulting in the accepted answer.

  • parserr, about 12 years ago
    Actually, that redirection is a leftover; I was testing without it, sorry. Either way, when I redirect the dump to a file it goes through without errors, so it should with a pipe too, right?
  • parserr, about 12 years ago
    I really don't, but that's a leftover; I was testing without it. Sorry for misleading you, I'll edit the question.
  • parserr, about 12 years ago
    I doubt the load is so high that the connection drops. Even if it were, it would be weird that it always fails at the same place (the same table), right? Anyway, this process was scripted, but when it started to fail I had to take it out and start analyzing by hand, and now it's (I hope temporarily) working through a file. Nevertheless, I'd like to know why it fails this way. Thanks for the suggestions.
  • Stephen Senkomago Musoke, about 12 years ago
    I am assuming that the two databases are on the same server. On the source database mysqldump locks the tables to generate the query which uses up more resources than usual. On the destination database each SQL statement is executed - indexes updated, logs created etc, which also uses up alot of resources. The sql script on the other hand contains some optimizations like foreign key constraints not being checked for each row inserted, extended inserts which are batched up