Now, suppose that we have a catastrophic crash on Wednesday at 8 a.m. that requires recovery from backups. To recover, first we restore the last full backup we have (the one from Sunday 1 p.m.). The full backup file is just a set of SQL statements, so restoring it is very easy:
shell> mysql < backup_sunday_1_PM.sql
At this point, the data is restored to its state as of Sunday 1 p.m. To restore the changes made since then, we must use the incremental backups; that is, the gbichot2-bin.000007 and gbichot2-bin.000008 binary log files. Fetch the files if necessary from where they were backed up, and then process their contents like this:
shell> mysqlbinlog gbichot2-bin.000007 gbichot2-bin.000008 | mysql
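If you would rather inspect the statements before they are applied, you can write the output of mysqlbinlog to an intermediate file and load that file with mysql afterward (the file name here is only an example):
shell> mysqlbinlog gbichot2-bin.000007 gbichot2-bin.000008 > incremental.sql
shell> mysql < incremental.sql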
We have now recovered the data to its state as of Tuesday 1 p.m., but are still missing the changes made between then and the time of the crash. To avoid losing them, we would have needed the MySQL server to store its binary logs in a safe location (RAID disks, SAN, ...) separate from the place where it stores its data files, so that these logs were not on the destroyed disk. (That is, we can start the server with a --log-bin option that specifies a location on a different physical device from the one on which the data directory resides. That way, the logs are safe even if the device containing the data directory is lost.) If we had done this, we would have the gbichot2-bin.000009 file (and any subsequent files) at hand, and we could apply them using mysqlbinlog and mysql to restore the most recent data changes with no loss up to the moment of the crash:
shell> mysqlbinlog gbichot2-bin.000009 ... | mysql
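As an illustration of the precaution described above, the server could have been started with a binary log basename on a separate physical device from the data directory; the path used here is only an example:
shell> mysqld_safe --log-bin=/safe-disk/mysql-logs/gbichot2-bin &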
For more information about using mysqlbinlog to process binary log files, see Section 6.5, “Point-in-Time (Incremental) Recovery Using the Binary Log”.
User Comments
Beware of restoring too many binary log files like this. Each one that you restore will be converted into a temporary file of SQL statements, e.g., SQL_LOAD_MB-1-0, SQL_LOAD_MB-2-0, and so on. This can easily fill up /tmp.

To work around this, set TMPDIR to somewhere with plenty of space. For extra peace of mind, save the output to a file instead of feeding it directly to the mysql client; you can have the client source it once you think you have something reasonable.
#!/bin/sh
TMPDIR=/my/big/disk/tmp; export TMPDIR
mysqlbinlog binlog.* > binlogdump.sql
You will have to clean up the temporary directory afterwards; make sure you do so before rerunning mysqlbinlog.
I did this with 6.4 GB of binary logs. It produced 6.4 GB of temporary files, but a final binlogdump.sql of only 124 MB.
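When the resulting dump looks reasonable, it can be loaded with the mysql client, for example:
shell> mysql < binlogdump.sql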
The other option is to loop over the list of files, appending to binlogdump.sql as you go. This produces a lot of warnings but a very similar SQL file. The differences are not easy to describe succinctly, but basically the first approach is more robust.
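A rough sketch of that second approach, assuming the same TMPDIR and file names as above, could look like this:
#!/bin/sh
TMPDIR=/my/big/disk/tmp; export TMPDIR
# Process each binary log separately, appending its statements to one dump file.
for f in binlog.*; do
    mysqlbinlog "$f" >> binlogdump.sql
done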