How to export a big MySQL database?

by kiawin

If you have a large production MySQL database that you wish to back up or replicate for testing, it is important to handle the export gently :)

For me, I use mysqldump. Remember to include a few optional flags, which speed up the export and, most importantly, avoid stressing your production MySQL server.

mysqldump -u[username] -p[password] [database_name] --no-autocommit --quick --opt > dump.sql

--no-autocommit: Inserts "SET AUTOCOMMIT=0;" and "COMMIT;" into the SQL dump, which enables a faster import later on.

--quick: Retrieves rows one at a time from MySQL instead of buffering the entire result set in memory. This is very useful when dumping large databases.

--opt: This is enabled by default and is shorthand for --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset.

If your tables use InnoDB, you should include the following flag:

--single-transaction

--single-transaction: Dumps a consistent snapshot of the database without blocking any application.
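Putting it together for InnoDB, the full command might look like this (a sketch; the bracketed values are placeholders for your own credentials and database name):

```shell
# Consistent, non-blocking dump of an InnoDB database.
mysqldump -u[username] -p[password] [database_name] \
  --single-transaction --quick --no-autocommit > dump.sql
```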

For MyISAM tables,

--disable-keys --lock-tables=false

--disable-keys: Inserts "/*!40000 ALTER TABLE tbl_name DISABLE KEYS */;" and "/*!40000 ALTER TABLE tbl_name ENABLE KEYS */;" into the dump, which enables a faster import later on. This flag is part of the --opt shorthand.

--lock-tables=false: Disables table locking. Although this cannot guarantee a consistent state of the database, it ensures the dump does not block any application.
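For a MyISAM database, the flags above might be combined like so (a sketch; bracketed values are placeholders):

```shell
# Fast, non-blocking dump of a MyISAM database.
# Note: consistency is not guaranteed while applications keep writing.
mysqldump -u[username] -p[password] [database_name] \
  --no-autocommit --quick --disable-keys --lock-tables=false > dump.sql
```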

If you have any tables that you wish to skip, you may include the following flag:

--ignore-table=[database_name].[table_name]

--ignore-table: Skips the named table during the dump. Repeat the flag once for each table you want to exclude.
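As an example, using mysqldump's --ignore-table flag to leave out a hypothetical sessions table (the table name here is made up for illustration):

```shell
# Exclude one table from the dump; repeat --ignore-table for each
# additional table to skip.
mysqldump -u[username] -p[password] [database_name] \
  --ignore-table=[database_name].sessions > dump.sql
```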
Lastly, if you are concerned about heavy disk I/O and disk space owing to the dump, you may run the operation remotely from another machine:

ssh -C [] "mysqldump ..." > dump.sql

ssh -C: Enables compression for all data sent during the SSH session. This is useful when streaming a large dump from a remote MySQL server.

Quite likely, you will need to forward your SSH key using ssh-agent if your servers authenticate with SSH keys.
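A minimal agent-forwarding sketch (the remote host and key path are assumptions; adjust them to your setup):

```shell
# Start an agent in this shell and load your private key.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa

# Connect with agent forwarding (-A) and compression (-C),
# streaming the dump back to the local machine.
ssh -A -C [remote_host] "mysqldump ..." > dump.sql
```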

For more info, you may read the details @ MySQL Manual.

Happy dumping!