MySQL efficiently copy all records from one table to another
Solution 1
There's just one thing you're missing: especially if you're using InnoDB, you want to explicitly add an ORDER BY clause to your SELECT statement to ensure you're inserting rows in primary key (clustered index) order:
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id
Consider removing secondary indexes on the backup table if they're not needed. This will also save some load on the server.
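As a sketch of that idea (the index name idx_category and column category_id are assumptions, not from the original schema), you would drop the secondary index before the bulk copy and rebuild it in one pass afterwards:

```sql
-- Hypothetical secondary index on the backup table
ALTER TABLE product_backup DROP INDEX idx_category;

INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;

-- Rebuilding the index once after the load is cheaper than
-- maintaining it row by row during the INSERT
ALTER TABLE product_backup ADD INDEX idx_category (category_id);
```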
Finally, if you are using InnoDB, reduce the number of row locks that are required by explicitly locking both tables. Note that issuing a second LOCK TABLES statement releases any locks you already hold, so both tables must be locked in a single statement:
LOCK TABLES product_backup WRITE, product READ;
INSERT INTO product_backup SELECT * FROM product ORDER BY product_id;
UNLOCK TABLES;
The locking probably won't make a huge difference, as row locking is very fast (though not as fast as table locks), but it's worth trying since you asked.
Solution 2
mysqldump -R --add-drop-table db_name table_name > filepath/file_name.sql
This takes a dump of the specified table with a drop option, so the existing table is deleted when you import it. Then run:
mysql db_name < filepath/file_name.sql
Solution 3
DROP the destination table:
DROP TABLE DESTINATION_TABLE;
CREATE TABLE DESTINATION_TABLE AS (SELECT * FROM SOURCE_TABLE);
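Note that CREATE TABLE ... AS SELECT copies the data and basic column definitions but not indexes, the primary key, or AUTO_INCREMENT attributes. If those matter, a sketch that preserves them (using the same placeholder table names) is:

```sql
DROP TABLE DESTINATION_TABLE;
-- LIKE copies column definitions, indexes, and the primary key
CREATE TABLE DESTINATION_TABLE LIKE SOURCE_TABLE;
INSERT INTO DESTINATION_TABLE SELECT * FROM SOURCE_TABLE;
```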
Solution 4
I don't think this will be worthwhile for a 50k-row table, but: if you have a dump of the database, you can reload a table from it. Since you want to load one table into another, you can change the table name in the dump with a sed command. Here are some hints: http://blog.tsheets.com/2008/tips-tricks/mysql-restoring-a-single-table-from-a-huge-mysqldump-file.html
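A minimal sketch of that sed rename (the file names are assumptions, and the sample dump below is a stand-in for real mysqldump output):

```shell
# Stand-in for a real dump produced by mysqldump
printf 'DROP TABLE IF EXISTS `product`;\nCREATE TABLE `product` (product_id INT);\n' > file_name.sql

# \b matches word boundaries, so product_id is left untouched (GNU sed)
sed 's/\bproduct\b/product_backup/g' file_name.sql > product_backup.sql

# Then load the renamed dump:  mysql db_name < product_backup.sql
```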
An alternative (depending on your design) would be to use triggers on the original table inserts so that the duplicated table gets the data as well.
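A minimal sketch of that trigger idea (the trigger name, the name column, and the two-column schema are illustrative assumptions; the real table has 31 columns):

```sql
-- Keeps product_backup in sync as rows are inserted into product.
-- trg_product_copy and the column list are hypothetical.
CREATE TRIGGER trg_product_copy
AFTER INSERT ON product
FOR EACH ROW
  INSERT INTO product_backup (product_id, name)
  VALUES (NEW.product_id, NEW.name);
```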
And a better alternative would be to create another MySQL instance and either run it in a master-slave configuration or in a daily dump master/load slave fashion.
crmpicco
Senior Analyst Developer (MoodleCloud) at Moodle, AWS Solutions Architect (Associate), Zend Certified Engineer and Google Analytics Qualified Individual
Updated on June 04, 2020
Comments
-
crmpicco almost 4 years
Is there a more efficient, less laborious way of copying all records from one table to another than doing this:
INSERT INTO product_backup SELECT * FROM product
Typically, the product table will hold around 50,000 records. Both tables are identical in structure and have 31 columns in them. I'd like to point out this is not my database design; I have inherited a legacy system. -
crmpicco almost 12 years I should have added that I am doing this in PHP code. It will be done before a series of INSERTs and UPDATEs are carried out on the product table, so I would be looking to do it in PHP code rather than MySQL administration functions. -
crmpicco almost 12 years I am using the MyISAM engine. I mentioned in an earlier post that I have inherited a legacy system, so it's MyISAM for the moment.
-
Danny Staple over 10 years Couldn't ordering that create an index tree that is unbalanced? In which case random order may be better for the primary key?
-
Marcus Adams over 10 years @DannyStaple, the tree is flat when inserting into an index in sorted order with MyISAM. This improves performance (not having to rebuild the index) as well as saving space. From the MySQL documentation: "When rows are inserted in sorted order (as when you are using an AUTO_INCREMENT column), the index tree is split so that the high node only contains one key. This improves space utilization in the index tree."
-
Martin over 7 years What if there is data already in the destination table that OP wants to preserve and add to?
-
Roberto Sepúlveda Bravo almost 7 years Why drop and create the table instead of DELETE FROM destination_table; INSERT INTO destination_table SELECT * FROM product? Which one is more efficient?
-
Epirocks over 5 years Drop will reset the primary key's auto-increment IDs.
-
Torque almost 5 years The second row in the second code block should read LOCK TABLES product; product_id is the column. -
gowthz over 2 years Using this for a Django-managed DB broke the table. Even though the primary key and auto-increment directives were correctly set up, new rows were being added with the same primary key (without incrementing). But weirdly I could not delete the rows once they were added, as it would raise an 'Unexpected update count received' error. I had to delete and restore my schema to fix this issue.