MySQL database backup over network


Solution 1

You can do this without an intermediate file, and reset the replication coordinates in the process, so you don't miss any updates (and have to resync again):

  1. Stop replication on the slave

    slave> mysql -e 'STOP SLAVE;'

  2. Dump the master to the slave, using the --master-data=1 flag

    master> mysqldump -e --master-data=1 --single-transaction $DATABASE | ssh -C user@slave 'mysql $DATABASE'

  3. Start replication on the slave

    slave> mysql -e 'START SLAVE;'

--master-data=1 causes mysqldump to emit a CHANGE MASTER TO ... statement at the top of the dump, setting the replication binlog file and offset to the exact point in the master's binlog at the time the dump was taken.

-e uses the extended-insert output format, putting multiple rows in each INSERT statement, which is more efficient both on the wire and when being applied to the slave.

--single-transaction tells mysqldump to wrap the whole dump in a single transaction rather than using LOCK TABLES. Note that this only yields a consistent dump for transactional engines such as InnoDB.
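Putting the three steps together, here is a minimal dry-run sketch run from the master (the hostname, user, and database name are placeholders, not from the original answer); it prints the commands instead of executing them so you can review them first:

```shell
#!/bin/sh
# Dry-run sketch of the streaming resync. SLAVE and DATABASE are placeholders.
SLAVE='user@slave.example.com'
DATABASE='mydb'

# Prints each command instead of running it; call the commands directly
# (or swap 'echo' for 'eval') when you are ready to run for real.
run() { echo "$@"; }

# 1. Stop replication on the slave so the dump can reposition it cleanly.
run "ssh $SLAVE \"mysql -e 'STOP SLAVE;'\""

# 2. Stream the dump straight into the slave's mysql client; --master-data=1
#    embeds the CHANGE MASTER TO statement, ssh -C compresses the stream.
run "mysqldump -e --master-data=1 --single-transaction $DATABASE | ssh -C $SLAVE 'mysql $DATABASE'"

# 3. Resume replication from the coordinates recorded in the dump.
run "ssh $SLAVE \"mysql -e 'START SLAVE;'\""
```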

Solution 2

Quick and dirty way (Starting from Server #1):

mysqldump -u root -p bigdb | bzip2 -c | ssh -T user@server2 "cat > backup.sql.bz2"
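Nothing uncompressed ever touches server #1's disk here; only the bzip2 stream crosses the wire. As a quick local sanity check of the compress-and-store leg (with echo standing in for mysqldump and a local file standing in for the remote cat):

```shell
# Local stand-in for: mysqldump bigdb | bzip2 -c | ssh ... "cat > backup.sql.bz2"
echo 'CREATE TABLE t (id INT);' | bzip2 -c > backup.sql.bz2

# The restore path on server2 would then be: bzcat backup.sql.bz2 | mysql bigdb
bzcat backup.sql.bz2
```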

Solution 3

You can dump MySQL databases from a remote host; just use the --host (or -h) argument to mysqldump:

server2# mysqldump -h server1 -u root -p --opt bigdb | gzip > database.sql.gz
server2# zcat database.sql.gz | mysql -u root -p bigdb

You could obviously skip the dump to disk and pipe straight into mysql, but imports tend to be slower than dumps, so a direct pipe keeps the master busy for longer. If the CPU on server2 is the bottleneck and its disk is fast, you may want to skip the gzip step to minimise the downtime on your master server.
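If gzip's CPU cost is the concern but you still want some compression on the wire, a lower compression level is a middle ground. A local sketch of the trade-off (the 1000-line sample file is purely illustrative):

```shell
# Build a small repetitive sample standing in for a SQL dump.
seq 1000 | sed 's/.*/INSERT INTO t VALUES (&);/' > sample.sql

# gzip -1 is fastest, -9 is smallest; the default (-6) sits in between.
gzip -1 -c sample.sql > fast.sql.gz
gzip -9 -c sample.sql > small.sql.gz

# Either one round-trips the dump identically.
gzip -dc fast.sql.gz | wc -l
```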

Obviously, my answer skips the specifics of recording replication coordinates and ensuring a consistent dump for replication, as these are dealt with in the MySQL manual.

Author: Chrizmo

Updated on September 18, 2022

Comments

  • Chrizmo
    Chrizmo over 1 year

    Server #1 is a MySQL database server running on Debian which contains numerous tables and one particular table that is over 100GB.

    Server #2 is used as a MySQL slave for replication, but now it needs to be reset and the replication reinitialized due to issues that arose.

There is not presently enough room on server #1's hard drive for a full database dump (i.e. less than 100GB of free space). Aside from upgrading the hardware, which would require downtime, what would be the best way to get the database dump from server #1 to server #2 intact, without corruption, and without filling up server #1's hard drive in the process?

  • Dana the Sane
    Dana the Sane almost 15 years
    The nfs host should definitely be separate from the two machines, the replication falls apart if a main server has a disk crash.
  • Toby
    Toby almost 15 years
    I'd even pipe it directly into the sql slave server aside from that I think that's probably the easiest way to recover a state to start replicating again...
  • TechnoTony
    TechnoTony almost 15 years
    Those are really good options. Depending on their network bandwidth it may be faster than mine.
  • Dave Cheney
    Dave Cheney almost 15 years
    No, MyISAM does not support transactions. You really should consider converting to InnoDB.