How to set up MySQL database backups quickly, safely and reliably
Anyone who runs MySQL databases is familiar with the problem of daily backups. It is important not only to have a consistent copy of all the necessary databases, but also to take backups at a set time without putting extra load on the production infrastructure and without locks or errors.
There are multiple ways to do it, both manual and automatic. They range from setting up MySQL database backups with AutoMySQLBackup and automating MySQL backups via SSH to backing up MySQL databases and web server files to an FTP server.
In this article, however, we’ll explain how to achieve the desired results with a utility called MyDumper. MyDumper is suitable for backing up both very large and very small databases. Its main feature is that it dumps data in parallel across several threads, which makes it several times faster and more efficient than standard tools like mysqldump.
Now let’s see how to set up automatic MySQL database backups step by step.
1) Install MyDumper
There are many ways to install MyDumper, but most of those you can find on the Internet are either outdated or simply do not work. That said, you can use the following to install MyDumper on CentOS 7:
yum install ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/LenzGr/CentOS_7/x86_64/mydumper-0.9.1-1.4.x86_64.rpm
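If the installation succeeds, a quick way to confirm that the binary is available is to ask for its version (the exact output will depend on the package you installed):
mydumper --version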
After the installation is complete, we can check the available options by asking MyDumper for its help output:
mydumper --help
Usage:
  mydumper [OPTION?] multi-threaded MySQL dumping

Help Options:
  -?, --help                  Show help options

Application Options:
  -B, --database              Database to dump
  -T, --tables-list           Comma delimited table list to dump (does not exclude regex option)
  -o, --outputdir             Directory to output files to
  -s, --statement-size        Attempted size of INSERT statement in bytes, default 1000000
  -r, --rows                  Try to split tables into chunks of this many rows. This option turns off --chunk-filesize
  -F, --chunk-filesize        Split tables into chunks of this output file size. This value is in MB
  -c, --compress              Compress output files
  -e, --build-empty-files     Build dump files even if no data available from table
  -x, --regex                 Regular expression for 'db.table' matching
  -i, --ignore-engines        Comma delimited list of storage engines to ignore
  -m, --no-schemas            Do not dump table schemas with the data
  -d, --no-data               Do not dump table data
  -G, --triggers              Dump triggers
  -E, --events                Dump events
  -R, --routines              Dump stored procedures and functions
  -k, --no-locks              Do not execute the temporary shared read lock. WARNING: This will cause inconsistent backups
  --less-locking              Minimize locking time on InnoDB tables.
  -l, --long-query-guard      Set long query timer in seconds, default 60
  -K, --kill-long-queries     Kill long running queries (instead of aborting)
  -D, --daemon                Enable daemon mode
  -I, --snapshot-interval     Interval between each dump snapshot (in minutes), requires --daemon, default 60
  -L, --logfile               Log file name to use, by default stdout is used
  --tz-utc                    SET TIME_ZONE='+00:00' at top of dump to allow dumping of TIMESTAMP data when a server has data in different time zones or data is being moved between servers with different time zones, defaults to on use --skip-tz-utc to disable.
  --skip-tz-utc
  --use-savepoints            Use savepoints to reduce metadata locking issues, needs SUPER privilege
  --success-on-1146           Not increment error count and Warning instead of Critical in case of table doesn't exist
  --lock-all-tables           Use LOCK TABLE for all, instead of FTWRL
  -U, --updated-since         Use Update_time to dump only tables updated in the last U days
  --trx-consistency-only      Transactional consistency only
  -h, --host                  The host to connect to
  -u, --user                  Username with privileges to run the dump
  -p, --password              User password
  -P, --port                  TCP/IP port to connect to
  -S, --socket                UNIX domain socket file to use for connection
  -t, --threads               Number of threads to use, default 4
  -C, --compress-protocol     Use compression on the MySQL connection
  -V, --version               Show the program version and exit
  -v, --verbose               Verbosity of output, 0 = silent, 1 = errors, 2 = warnings, 3 = info, default 2
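Before wiring MyDumper into a script, it can be useful to try a one-off dump of a single database by hand. The command below is only a sketch: the database name mydb and the output directory /tmp/mydumper-test are placeholders, and the user and password are the same ones you will later put into the backup script.
mydumper --user=your-mysql-user --password=your-mysql-user-password --database=mydb --outputdir=/tmp/mydumper-test --compress --threads=2 --verbose=3
If it succeeds, the output directory will contain per-table schema and data files, which is exactly what our automated script will produce.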
2) Backup execution script
### create sbin directory in /root/ ###
mkdir /root/sbin
### create backup directory ###
mkdir /backup
### create mysql-backup file in /root/sbin/ ###
nano /root/sbin/mysql-backup
Paste the following contents into the file:
ionice -c3 mydumper --user=your-mysql-user --password=your-mysql-user-password -o /backup/`date +%Y-%m-%d_%H-%M` --compress --triggers --events --routines --build-empty-files --less-locking --use-savepoints --success-on-1146 --compress-protocol --threads=2 --verbose=3
/usr/bin/ionice -c3 find /backup/ -type f -mtime +14 -exec rm -f {} \; >/dev/null 2>&1
/usr/bin/ionice -c3 find /backup/ -type d -empty -exec rmdir {} \;
In this script, you need to replace the following values with your own:
--user=(paste here your MySQL user who has rights to the databases)
--password=(paste here that user's password)
and
--threads=(number of cores on your server divided by 2; for example, if your server has 8 cores, you can set 2-4 threads)
and how long backups are stored:
-mtime +14 (how many days of backups to keep; in this example, backups older than 14 days are deleted, but you can specify your own value)
Save the script and exit. Now make it executable by setting its permissions to 775 with the following command:
chmod 775 /root/sbin/mysql-backup
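Before scheduling the script, it is worth running it once by hand to make sure the dumps land where we expect. A quick sanity check, assuming the paths used above:
/root/sbin/mysql-backup
ls /backup/
You should see a new directory named with the current date and time, containing the compressed schema and data files for each table.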
3) Cron-task for backup
Now we need to create a cron task that will run the backup script at the specified time. To do that, run this command:
nano /etc/cron.d/mysql-backup
and paste
0 2 * * * root bash -lc "/root/sbin/mysql-backup" > /dev/null
Save and exit.
From now on, you’ll have your daily MySQL database backups: the backup script will be launched every night at 2 am.
It will back up all the databases on the server to the /backup/ directory and also delete older backups according to the storage period we’ve specified.
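Finally, a backup is only as good as your ability to restore it. MyDumper ships with a companion tool called myloader that restores dumps in parallel. A minimal restore sketch (the dump directory below is a placeholder for one of the dated directories created by our script) could look like this:
myloader --user=your-mysql-user --password=your-mysql-user-password --directory=/backup/(your dump directory) --overwrite-tables --threads=2 --verbose=3
The --overwrite-tables flag tells myloader to drop and recreate tables that already exist, so use it carefully on a live server.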