How do I backup a mysql database, but at low priority?
Solution 1
If you have a spare server around that can cope with the write load of your primary, you can set up replication to it and then back up from the slave. This also has the advantage that you can stop replication while you do the backup and get a consistent snapshot of your data across all databases, or all tables in one database, without impacting the database server. This is the setup I always recommend for backing up MySQL if you have the resources.
As a nice bonus, you now have a read-only slave you can use for slow long-running queries.
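A sketch of what that backup step might look like, assuming a replica reachable at a placeholder hostname with a dedicated backup user (all names and the BACKUP_PW variable are illustrative, not from the original answer):

```shell
# Sketch: consistent backup taken from the replica. The hostname, user, and
# BACKUP_PW variable are placeholders for your environment.
backup_from_slave() {
    host=slave.example.com
    # Pause the SQL thread so every database freezes at one replication position
    mysql -h "$host" -u backup -p"$BACKUP_PW" -e 'STOP SLAVE SQL_THREAD;'
    # Dump while the replica is quiescent; the master never sees the load
    mysqldump -h "$host" -u backup -p"$BACKUP_PW" --all-databases \
        --single-transaction --quick > /backups/all-databases.sql
    # Resume replication; the slave catches up from where it stopped
    mysql -h "$host" -u backup -p"$BACKUP_PW" -e 'START SLAVE SQL_THREAD;'
}
```

Run `backup_from_slave` from cron on the replica; replication lag simply grows during the dump and shrinks again afterwards.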
Solution 2
Andy, by now I'm guessing you've had plenty of time to find a solution. I've recently come up with a solution to this that's working great for me at tsheets, and figured I'd share it.
cstream is a general-purpose stream-handling tool, like UNIX dd, usually used in command-line pipes. What makes cstream useful here is that it lets you specify a maximum bandwidth for its input, which means you can limit the disk IO of your mysqldump with a simple command like this:
mysqldump --single-transaction --quick -u <USER> -p<PASS> <Database> | cstream -t 1000000 > backup.sql
Assuming you're backing up a database that uses all InnoDB tables, the above command is safe (it won't affect other queries) and will run your mysqldump while limiting its disk reads to just one megabyte per second. Adjust the bandwidth with the -t parameter to whatever value lets your environment perform the backup without impacting your customers' experience.
Solution 3
FWIW you should also be able to do this with pv (http://linux.die.net/man/1/pv)
mysqldump --single-transaction --quick -u <USER> -p<PASS> <Database> | pv --rate-limit 1m > destination (or | nc or | tar cfj backup.bz2 -)
The nice thing about this is the various options for monitoring progress, and the -R option, which lets you pass options to an already-running pv process, e.g. --rate-limit to alter the transfer rate on the fly.
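For example, the two steps might look like this (credentials, the 1m/5m rates, and the pidof lookup are all placeholders; check pv(1) for the remote-control semantics on your version):

```shell
# Sketch: start a rate-limited dump, then adjust the limit later without
# restarting it. Credentials and the 1m/5m rates are placeholders.
throttled_dump() {
    # Cap the dump stream at 1 MB/s
    mysqldump --single-transaction --quick -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" \
        | pv --quiet --rate-limit 1m > backup.sql
}

raise_limit() {
    # Tell the already-running pv (found by PID) to allow 5 MB/s instead
    pv --remote "$(pidof pv)" --rate-limit 5m
}
```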
Solution 4
If you use InnoDB, you can try xtrabackup with the --throttle option.
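A sketch of that invocation, assuming Percona XtraBackup is installed; the user, password, and target directory are placeholders, and --throttle caps the number of IO operations per second:

```shell
# Sketch, assuming Percona XtraBackup; --throttle limits IO operations per
# second. User, password, and target dir are placeholders.
throttled_xtrabackup() {
    xtrabackup --backup --throttle=10 \
        --user=backup --password="$BACKUP_PW" \
        --target-dir="/backups/xtrabackup-$(date +%F)"
}
```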
You can also look at ionice and run mysqldump under it.
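That might look like the sketch below (credentials are placeholders). One caveat worth hedging: ionice only throttles the mysqldump client's own IO; the mysqld server process still performs the table reads, so this helps most when the dump file lands on the same busy disk.

```shell
# Sketch: run the dump client in the idle IO scheduling class (effective with
# the CFQ/BFQ schedulers). Only the client's IO is throttled, not mysqld's.
nice_dump() {
    ionice -c3 nice -n 19 mysqldump --single-transaction --quick \
        -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" > backup.sql
}
```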
Or maybe you want to enable binary logging in MySQL and run a full dump once per week or night, while copying the binlogs to a safe location every 1-2 hours. And a read-only slave for backup-only purposes is an option as well.
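The hourly binlog-copy step could be sketched like this (paths, host, and credentials are placeholders; it assumes log-bin is enabled in my.cnf):

```shell
# Sketch of the hourly binlog-copy cron step. Paths, host, and credentials
# below are placeholders; requires log-bin enabled in my.cnf.
copy_binlogs() {
    # Rotate: closes the current binary log and opens a new one
    mysqladmin -u backup -p"$BACKUP_PW" flush-logs
    # Ship the closed binlogs to the backup host
    rsync -a /var/lib/mysql/mysql-bin.[0-9]* backuphost:/backups/binlogs/
}
```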
andyuk
Updated on September 17, 2022

Comments
-
andyuk over 1 year
I want to backup a database, but during the day when there is load on the server. It's vital that the backup doesn't impact apache and other databases running on the same server.
It should be possible to use the mysqldump command, but run the command at low priority.
How can I do this?
Update: Looks like simply using nice with mysqldump doesn't work since mysqldump spawns a new process.
-
Admin over 14 years: have you looked at mysqlhotcopy?
-
Tomas over 10 years: @fsb how does mysqlhotcopy support what the OP wants?
-
andyuk over 14 years: Unfortunately this option looks like it is the most work, but I guess it's the only real option if I want minimal impact on the main server.
-
Dan Carley over 14 years: It's nearly the only way to get a properly quiesced backup if you don't wish to impact operations, since slowing down mysqldump will either cause a long lock to be held or tables to be out-of-sync. -
sucotronic over 11 years: Just tested and it works like a charm! Thanks for sharing, Brandon!
-
dr_ over 5 years: That's not true: if tables are InnoDB, no lock is placed on them for reading.