5
Wack
5y

How do you restore partial data from a MySQL backup? Don't worry, nothing is wrong, I'm just thinking about how I would restore something if shit hits the fan.

Our current strategy for database backups is to just run mysqldump during the night, using a cronjob (feel free to suggest a better way ;))

1) Restoring the full db: just read that sql file into the mysql command.
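The nightly cron dump and the full restore from point 1 might look like this. A minimal sketch: the database name, paths, schedule, and the dated filename are all assumptions; --single-transaction gives a consistent snapshot of InnoDB tables without locking them for the whole dump.

```shell
# Crontab entry (assumed schedule): dump every night at 03:00
#   0 3 * * * /usr/local/bin/nightly-dump.sh

# nightly-dump.sh: consistent dump without locking InnoDB tables
mysqldump --single-transaction --quick --routines mydb \
  | gzip > "/backups/mydb-$(date +%F).sql.gz"

# Full restore (point 1): stream the dump straight back into the server
gunzip < /backups/mydb-2024-05-01.sql.gz | mysql mydb
```

Compressing through a pipe keeps the backup small without ever writing the uncompressed dump to disk.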

2) Restoring just one (or some) tables: open the file in an IDE, select only the lines you're after, copy them to a new file, and read that one in. (Possible issue: say we have a table B to which entries of table A are related, and we just want to restore table A. We can't nuke table B too, as table C is also referring to it, so we have to do some orphan removal in B afterwards.)
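The copy-paste step in point 2 can be scripted instead of done in an IDE: mysqldump delimits each table with a "-- Table structure for table `name`" comment, so a sed range can cut one table out of the full dump. A self-contained sketch on a miniature stand-in dump (table and column names are made up, and it assumes the wanted table isn't the last one in the file):

```shell
# Miniature stand-in for a real mysqldump file
cat > backup.sql <<'EOF'
-- Table structure for table `A`
DROP TABLE IF EXISTS `A`;
CREATE TABLE `A` (id INT, b_id INT);
INSERT INTO `A` VALUES (1,10),(2,20);
-- Table structure for table `B`
DROP TABLE IF EXISTS `B`;
CREATE TABLE `B` (id INT);
EOF

# Print from table A's header up to the next table header, then drop that
# trailing header line, leaving only table A's statements.
sed -n '/-- Table structure for table `A`/,/-- Table structure for table `/p' backup.sql \
  | sed '$d' > table_A.sql

# After reading table_A.sql back into the real DB, clean up rows in B that
# neither A nor C references any more (hypothetical FK columns):
#   mysql mydb -e 'DELETE FROM B WHERE
#     NOT EXISTS (SELECT 1 FROM A WHERE A.b_id = B.id)
#     AND NOT EXISTS (SELECT 1 FROM C WHERE C.b_id = B.id)'
```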

3) Restoring selected entries in specific tables: set up a new db, read the full backup in there, dump those entries to a new file and read that into the real db
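Point 3 scripted end to end; the db name, table name, and WHERE clause are placeholders. mysqldump's --where and --no-create-info flags let you dump just the rows you need, without the CREATE TABLE statements:

```shell
# Load the full backup into a scratch database
mysql -e 'CREATE DATABASE restore_tmp'
mysql restore_tmp < backup.sql

# Dump only the rows you want, data only (no CREATE TABLE)
mysqldump restore_tmp orders --no-create-info \
  --where="id IN (101, 102, 103)" > selected.sql

# Read them into the live database, then drop the scratch copy
mysql mydb < selected.sql
mysql -e 'DROP DATABASE restore_tmp'
```

If some of those rows still exist in the live db, mysqldump's --replace (or --insert-ignore) controls how the collisions are handled.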

How do you do it? Any better approaches/tools?

Comments
  • 2
    We use mysqldump as well in our pipeline

    Restoring is as easy as deleting the container and starting a new one with the dump
  • 2
If you work with full & incremental backups it's pretty easy:
You just restore the full backup, then restore the incremental to bring it back up to date.
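Assuming "incremental" here means MySQL's binary logs (as in the linked docs), a point-in-time restore looks roughly like this; the log file names, position, and db name are assumptions:

```shell
# 1. Restore the last full dump
mysql mydb < full-backup.sql

# 2. Replay the binary logs from the position recorded at dump time
#    (mysqldump --master-data writes that position into the dump itself)
mysqlbinlog --start-position=4 binlog.000042 binlog.000043 \
  | mysql mydb

# Or stop replaying just before a bad statement:
#   mysqlbinlog --stop-datetime="2024-05-01 09:59:00" binlog.000042 | mysql mydb
```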

    https://dev.mysql.com/doc/...

I do this in the CLI when it's actually been needed; I've only had to do it like twice over the last 4 years 🤷‍♂️
    Handy to know but generally mysql/mariadb is pretty sturdy unless something serious goes wrong.

Otherwise, for migrations, I just take a dump, shut down the MySQL server there and then, import the dump into the new server, and connect it up to the web instance.

Downtime only lasts while you wait for the export/import to occur.
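The migration flow above can be collapsed into a single pipe so the dump never hits disk (host and db names are assumptions; stop writes on the old server first so nothing changes mid-copy):

```shell
# Stream the dump straight into the new server
mysqldump --single-transaction --routines --triggers mydb \
  | mysql -h new-db-host mydb
```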
  • 1
    If you need to restore only part of a backup, restore to a new db/table and use SQL to transfer any missing or broken records.

I assume the use case is that you changed or deleted part of the data but haven't lost the whole lot, and want to keep later changes.
  • 0
    "Just asking for a friend..."
  • 0
    I have some timestamps to check, like created and updated, occasionally soft_deleted, and of course there's always the primary key.

If the PK doesn't exist, then insert; otherwise only update the values from the backup when the backup's updated timestamp is newer, etc.
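That merge rule (insert if the PK is missing, otherwise take the backup's values only when its updated timestamp is newer) can be sketched as one MySQL statement, assuming the backup was loaded into a backup_db schema and hypothetical columns id, val, updated; exact syntax may need adjusting for your MySQL version:

```shell
mysql mydb <<'SQL'
INSERT INTO t (id, val, updated)
SELECT b.id, b.val, b.updated FROM backup_db.t AS b
ON DUPLICATE KEY UPDATE
  val     = IF(b.updated > t.updated, b.val, t.val),
  updated = IF(b.updated > t.updated, b.updated, t.updated);
SQL
```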
  • 0
    Take a look at mydumper, it's way faster than mysqldump and it can export all tables in separate files.

    https://github.com/maxbube/mydumper
  • 2
    @alexbrooklyn wait, are you running the DB in a container? I thought you shouldn't do that in prod, as it either fucks it up or at least slows it down, due to the "mounting" of the data...

    @C0D4 currently we're just running daily full backups. I should implement some incrementals too!

@ethernetzero what exactly is the advantage? Don't you then, in most cases, have to import multiple files and either disable foreign key checks while importing or import them in the right order?
  • 1
    @Wack It's multithreaded and it can parallelize the dumping and loading operations, it's really fast. Also, it has options to reduce the necessary locking while still performing a consistent backup.

    The resulting files are SQL scripts like those produced by mysqldump, so you can use them directly if you want, but mydumper provides another tool, myloader, to quickly import them in parallel too.

    As for restoring only a few selected tables, the directory where mydumper writes the schema and data dumps also includes a metadata file. If you want to restore only a few tables you can copy the schema and data dumps for those tables along with the metadata file to a new directory and tell myloader to use that directory.
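The partial-restore trick described above, sketched with assumed directory and file names (mydumper writes per-table schema and data files alongside a metadata file):

```shell
# Hypothetical mydumper export directory: export-20240501/
# Copy just the files for mydb.users plus the metadata file
mkdir partial
cp export-20240501/metadata partial/
cp export-20240501/mydb.users-schema.sql \
   export-20240501/mydb.users.00000.sql partial/

# Let myloader import only that directory, in parallel
myloader --directory partial --database mydb --overwrite-tables --threads 4
```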