What is the easiest and fastest way to move a MySQL database?


Solution 1

Usually you run mysqldump to create a copy or backup of a database, as follows:

$ mysqldump -u user -p db-name > db-name.out

Copy the db-name.out file to the remote MySQL server using sftp/ssh:

$ scp db-name.out [email protected]:/backup

Restore the database on the remote server (log in over ssh):

$ mysql -u user -p db-name < db-name.out

OR

$ mysql -u user -p'password' db-name < db-name.out
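Before restoring on the remote side, it can be worth verifying that the copy arrived intact. A minimal sketch, using a stand-in file in place of real mysqldump output (filenames follow the examples above):

```shell
# Sketch: checksum the dump before copying, then verify on the remote side.
# A small stand-in file is used here in place of real mysqldump output.
cd "$(mktemp -d)"
printf 'CREATE TABLE t (id INT);\n' > db-name.out
md5sum db-name.out > db-name.out.md5
# after "scp db-name.out db-name.out.md5 user@remote:/backup",
# run this on the remote host; it prints "db-name.out: OK" on a clean transfer:
md5sum -c db-name.out.md5
```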

How do I copy a MySQL database from one computer/server to another?

The short answer is that you can copy a database from one computer/server to another using ssh or the mysql client.

You can run all three of the above steps in one pass using the mysqldump and mysql commands (insecure method; use only if you are on a VPN or trust your network):

$ mysqldump db-name | mysql -h remote.box.com db-name

Use ssh if you don't have direct access to the remote mysql server (secure method):

$ mysqldump db-name | ssh [email protected] mysql db-name

OR

$ mysqldump -u username -p'password' db-name | ssh [email protected] mysql -u username -p'password' db-name
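On a slow link, you can also compress the stream in transit. This is a sketch under the same placeholder names as above; the round trip below uses sample text in place of a real dump so the pipeline itself can be seen working:

```shell
# Sketch: compress the dump stream to cut transfer time on slow links.
# In practice the producer is mysqldump and the consumer runs on the far side, e.g.:
#   mysqldump db-name | gzip -c | ssh user@remote.box.com 'gunzip -c | mysql db-name'
# Round trip with sample text standing in for a real dump:
printf 'CREATE TABLE t (id INT);\n' | gzip -c | gunzip -c
```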

You can also copy just a single table, called foo, to a remote database (on the remote MySQL server remote.box.com) called bar using the same syntax:

$ mysqldump db-name foo | ssh [email protected] mysql bar

OR

$ mysqldump -u user -p'password' db-name foo | ssh [email protected] mysql -u user -p'password' bar

Almost all commands can be run using pipes under UNIX/Linux operating systems.


Solution 2

If you have root access, you might find it quicker to avoid mysqldump: create the database on the destination server, then copy the database files directly. (Note that copying files like this is only reliable when both servers run the same MySQL version, and InnoDB tables also depend on the shared ibdata/ib_logfile files, so it is safest with MyISAM tables.) Assuming your user has access to the destination server's MySQL directory:

[root@server-A]# /etc/init.d/mysqld stop
[root@server-A]# cd /var/lib/mysql/[databasename]
[root@server-A]# scp * user@otherhost:/var/lib/mysql/[databasename]
[root@server-A]# /etc/init.d/mysqld start

Two things are important here: stop mysqld on both servers before copying the DB files, and make sure the file ownership and permissions are correct on the destination before starting mysqld on the destination server.

[root@server-B]# chown mysql:mysql /var/lib/mysql/[databasename]/*
[root@server-B]# chmod 660 /var/lib/mysql/[databasename]/*
[root@server-B]# /etc/init.d/mysqld start
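Instead of scp plus a separate chown/chmod, the copy can be done in one step with tar piped over ssh, which preserves file modes. This is a sketch using the same placeholder host and paths; the runnable part below does the tar round trip between two local temporary directories as a stand-in for the ssh hop:

```shell
# Sketch: copy the database directory in one step, preserving file modes
# (run with mysqld stopped on both ends). The real form would be:
#   tar -C /var/lib/mysql -czf - databasename | ssh user@otherhost 'tar -C /var/lib/mysql -xzf -'
# Local round trip standing in for the ssh hop:
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir "$SRC/databasename"
echo 'data' > "$SRC/databasename/foo.MYD"
tar -C "$SRC" -czf - databasename | tar -C "$DST" -xzf -
ls "$DST/databasename"
```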

Since time is your priority here, whether to use compression depends on whether the time lost compressing and decompressing (with something like gzip) exceeds the time saved by transmitting less data; that is, on the speed of your connection.

Solution 3

For an automated way to backup your MySQL database:

The prerequisite to any form of backup is finding the ideal time of day to complete the task without hindering the performance of running systems or interfering with users. On that note, the size of the database must be taken into consideration, along with the I/O speed of the drives; this becomes increasingly important as the database grows. (Another factor that comes to mind is the number of drives: writing the backup to a drive other than the one holding the database speeds things up, since the heads are not performing both reads and writes.) For the sake of keeping this readable, I'm going to assume the database is of a manageable size (100 MB) and the work environment is a 9am-5pm job with no real stress or other systems running during off hours.

The first step would be to log into your local machine with root privileges. Once at the root shell, a MySQL user will need to be created with read only privileges. To do this, enter the MySQL shell using the command:

mysql -uroot -ppassword

Next, a user will need to be created with read-only privileges on the database that needs to be backed up. In this case a specific database is not assigned to the user, so the script can be reused later. To create a user with the privileges mysqldump needs (SELECT to read the data, plus LOCK TABLES, since mysqldump locks tables by default), enter these commands in the MySQL shell:

grant SELECT, LOCK TABLES on *.* TO backupdbuser@localhost IDENTIFIED BY 'backuppassword';
FLUSH PRIVILEGES;

With the MySQL user created, it's safe to exit the MySQL shell and drop back into the root shell using exit. From here, we'll need to create the script that will run our backup commands; this is easily accomplished using Bash. The script can be stored anywhere, since we'll use a cron job to run it nightly. For the sake of this example, we'll place the script in a new directory called "backupscripts." To create this directory, use this command at the root shell:

mkdir /backupscripts

We'll also need to create a directory to store our backups locally. We'll name this directory "backuplogs." Issue this command at the root shell to create the directory:

mkdir /backuplogs

The next step would be to log into your remote machine with root credentials and create a "backup user" (note that useradd's -p flag expects an already-encrypted password, so it is simpler to set the password interactively with passwd):

useradd -c "backup user" backupuser
passwd backupuser

Create a directory for your backups:

mkdir /backuplogs/

Before you log out, grab the IP address of the remote host for future reference using the command:

ifconfig -a
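On newer distributions ifconfig may not be installed; the ip tool from iproute2 reports the same information:

```shell
# Sketch: list IPv4 addresses per interface when ifconfig is unavailable
ip -4 addr show
```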

eth0 is the standard interface for a wired network connection. Note this IP_ADDR.

Finally, log out of the remote server and return to your original host.

Next we'll create the script file on our local machine, not the remote one. We'll use VIM (don't hold it against me if you're a nano or emacs fan; I'm not going to cover how to use VIM here). The script will use mysqldump to back up the database, compress the resulting file for storage, read the file to STDOUT to satisfy the instructions, copy the backup to the remote server with scp, and remove backups older than 7 days. The script looks like this:

vim /backupscripts/mysqldbbackup.sh

#!/bin/sh

# create a temporary file for the dump to be stored
# (note: no spaces around "=" in shell assignments)
BACKUPDIR=/backuplogs/
TMPFILE=tmpout.sql
CURRTIME=$(date +%Y%m%d).tgz

#backup your database
mysqldump -ubackupdbuser -pbackuppassword databasename > $BACKUPDIR$TMPFILE

#compress this file and store it locally with the current date
tar -zvcf /backuplogs/backupdb-$CURRTIME $BACKUPDIR$TMPFILE

#per instructions - cat the contents of the SQL file to STDOUT
cat $BACKUPDIR$TMPFILE

#cleanup script
# remove local backups older than 7 days; use -mtime (modification time),
# since scp reading the files each night would keep resetting -atime,
# and match the "backupdb-" prefix used by the tar command above
find $BACKUPDIR -mtime +7 -name 'backupdb-*.tgz' -exec rm {} \;

#remove the old backups from the remote server
ssh backupuser@remotehostip find /backuplogs/ -name 'backupdb-*.tgz' -exec rm {} \;

#copy the current backup directory to the remote server using scp
scp -r /backuplogs/ backupuser@remotehostip:/backuplogs/

#################
# End script
#################
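The 7-day rotation step can be tried in isolation against a throwaway directory before trusting it with real backups. This sketch uses GNU touch's -d option to fabricate an old file:

```shell
# Sketch: exercise the rotation step against a temporary directory.
# "backupdb-" matches the filename prefix used by the backup script.
BACKUPDIR=$(mktemp -d)
touch -d '8 days ago' "$BACKUPDIR/backupdb-20200101.tgz"   # stale backup
touch "$BACKUPDIR/backupdb-20200115.tgz"                   # fresh backup
find "$BACKUPDIR" -mtime +7 -name 'backupdb-*.tgz' -exec rm {} \;
ls "$BACKUPDIR"   # only the fresh backup should remain
```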

With this script in place, we'll need to set up SSH keys so that we're not prompted for a password every time the script runs. We'll do this with ssh-keygen and the command:

ssh-keygen -t rsa

Since the script runs unattended from cron, leave the passphrase empty at the prompt (a passphrase-protected key would otherwise require ssh-agent or manual entry every time the script connects). This creates your private key. Do not share this.

The file you need to share is your public key; it is stored in ~/.ssh/id_rsa.pub. The next step is to transfer this public key to your remote host. To display the key, use the command:

cat ~/.ssh/id_rsa.pub

copy the string in the file. Next ssh into your remote server using the command:

ssh backupuser@remotehostip

Enter your password, then edit the file ~/.ssh/authorized_keys. Paste the string from your id_rsa.pub file into the authorized_keys file, write the changes, and exit the editor. Log out of the remote server, then test that the key works by logging in again with the previous ssh command. If no password is asked for, it is working properly. Log out of the remote server again.
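If ssh-copy-id is available, the manual copy-and-paste of the key can be replaced with a single command. This sketch generates a key non-interactively into an illustrative temporary path; only the key generation below is run here, the ssh-copy-id line needs your real remote host:

```shell
# Sketch: generate a key non-interactively with an empty passphrase for cron use
# (the key path is illustrative; the default would be ~/.ssh/id_rsa).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q
# ssh-copy-id appends the public half to ~/.ssh/authorized_keys on the remote host:
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" backupuser@remotehostip
ls "$KEYDIR"
```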

The final thing we'll need to do is create a cron job to run this every night after users have logged off. Using crontab, we'll edit the current user's (root's) file to avoid permission issues. *Note: this can have serious implications if there are errors in your script, including deletion of data, security vulnerabilities, etc. Double-check all of your work and make sure you trust your own system; if that is not possible, set up a separate user with limited permissions on the current server.* To edit your current crontab, issue the command:

crontab -e

While in the crontab editor (it'll open in your default text editor), we're going to set our script to run every night at 12:30am. Enter a new line into the editor:

30 0 * * * bash /backupscripts/mysqldbbackup.sh
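For reference, the five time fields of that entry break down as follows (a commented config fragment, not a runnable script):

```shell
# ┌───────── minute (30)
# │ ┌─────── hour (0 = midnight, so 12:30am)
# │ │ ┌───── day of month (* = every day)
# │ │ │ ┌─── month (* = every month)
# │ │ │ │ ┌─ day of week (* = every weekday and weekend day)
30 0 * * *   bash /backupscripts/mysqldbbackup.sh
```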

Save this file and exit the editor. To make sure crond picks up the new job, restart the service:

/etc/init.d/crond restart
Author: user1031143

Updated on October 15, 2020

Comments

  • user1031143
    user1031143 over 3 years

    Hi, I have to move a MySQL database to another server.

    It's nearly 5 GB.

    I have root access on both servers.

  • Alex Howansky
    Alex Howansky about 12 years
    Make sure you use the extended inserts option on your dump (I think it might be on by default, but not positive) or your restore speed will suffer significantly.
  • David
    David over 10 years
    This is by far the easiest method. Not sure why this isn't up higher. The MySQL server is then a carbon copy of the original. I just cloned an entire webserver onto a unique server in about 5 mins.
  • Bathakarai
    Bathakarai over 10 years
    I did the above steps, but I get "ERROR 1146 (42S02): Table 'sample.user' doesn't exist" when I try to query the user table. Any idea?
  • Umbrella
    Umbrella over 10 years
    Ideas: files for user table in wrong path, wrong file permissions, or servers configured differently.