
I Migrated a 2GB MySQL Database at 3 AM (And Lived to Tell the Tale)


There's a particular adrenaline rush that hits you when you realize your client's only maintenance window is between 2 AM and 5 AM on a Sunday. No staging environment, no second server for trial and error. Just you, a terminal, and a 2GB MySQL database powering the entire business.

It wasn’t my first migration, nor my last, but this one had a catch: the source was a legacy MySQL 5.7 instance on shared hosting with a severely restricted SSH connection (no way to install external tools). The target was a freshly spun-up MySQL 8 container in my Docker infrastructure. The client needed zero data loss and, ideally, under 30 minutes of downtime.

My first instinct was to use mysqldump, but that was a quick no-go…

The problem with mysqldump on a 2GB database over a slow network is time. The dump took 40 minutes just to generate. Downloading it? Another 25. By the time I had the .sql file locally, an hour had passed and I hadn’t even started importing.

So, let’s stream the dump instead…

Instead of dumping, downloading, and importing as three separate steps, I chained them into a single pipeline launched directly from the new server:

# Yes, passing the password in plain text in the CLI is a security sin that dirties the bash_history.
# But it was 3 AM and the source server was going to be destroyed the next day. Don't judge me.
ssh josema@server "mysqldump -u db_user -p'EL_PASS' --single-transaction --routines --triggers mydb" | pv -pterb | mysql -u root -p'EL_PASS' -h 127.0.0.1 mydb_new

By running the command from the new server’s terminal against the source, the data transfer runs at data-center speeds. The --single-transaction flag is key: it tells MySQL to take a consistent snapshot using InnoDB’s MVCC (multiversion concurrency control), meaning the source database keeps serving requests while we drain its data. No table locks. The pv command in the middle of the pipe gives me a real-time progress bar, so I could see at a glance how the transfer was going.
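For anyone repeating this outside a throwaway 3 AM window, there is a cleaner way to handle that password. This is a minimal sketch, not what I actually ran that night; the file path and credentials are placeholders. The idea is to put the secret in a client options file with tight permissions, so it never touches bash_history or shows up in `ps` output:

```shell
# Illustrative sketch: keep the password in a client options file
# instead of on the command line. Path and credentials are placeholders.
cnf=$(mktemp)
cat > "$cnf" <<'EOF'
[client]
user=db_user
password=EL_PASS
EOF
chmod 600 "$cnf"
echo "client options file ready at $cnf"

# The import side of the pipeline would then become something like:
# ... | mysql --defaults-extra-file="$cnf" -h 127.0.0.1 mydb_new
```

The same trick works on the dump side, as long as you can drop a file on the source server. On truly locked-down shared hosting, even that may not be an option, which is how you end up with passwords in the CLI at 3 AM.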

Since the old site stayed online, the only trade-off was accepting that any writes occurring during that window would be lost. At 3 AM on a Sunday, that’s an acceptable calculated risk. This slashed the total time from over an hour down to about 28 minutes. Once it finished, I just had to flip the DNS to the new IP. Zero downtime for the user.

Moving on to the next problem… character encoding

After the import, some product descriptions showed those classic “weird” mojibake characters. The old database defaulted to latin1, while MySQL 8 defaults to utf8mb4. So I had to run a conversion on the affected tables (thankfully, not that many):

ALTER TABLE products CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
...
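If more than a handful of tables are affected, you don't have to type each ALTER by hand. Here's a rough sketch of how I'd generate them in bulk; the table names and the stand-in listing are hypothetical, and in practice the listing would come from a query against information_schema filtering on the old collation:

```shell
# Stand-in for the real listing, which would come from something like:
#   mysql -N -e "SELECT table_name FROM information_schema.tables
#                WHERE table_schema = 'mydb_new'
#                AND table_collation LIKE 'latin1%'" > latin1_tables.txt
printf '%s\n' products orders customers > latin1_tables.txt

# Turn each affected table into its conversion statement, then review
# convert.sql before applying it with: mysql mydb_new < convert.sql
while read -r table; do
  echo "ALTER TABLE $table CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
done < latin1_tables.txt > convert.sql
cat convert.sql
```

Reviewing the generated file before running it matters here: CONVERT TO CHARACTER SET rewrites the whole table, so on big tables you want to know exactly what you're about to lock up.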

This is the classic detail that makes you start Monday morning “on the right foot”: stepping right on your phone, which won’t stop ringing because the client noticed those characters on their product pages. These are silent errors; they don’t break the site, but they definitely make the client panic.

Post-migration verification

Before making the final DNS leap, I ran a row-count comparison across every table and checked the checksums of the critical ones. It sounds paranoid, but I’ve learned that the cost of verifying is always far lower than the cost of assuming.
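For the curious, the comparison itself is nothing fancy. Here's a sketch of the idea; the two mysql invocations are illustrative and commented out, and the stand-in files (with made-up counts) simulate their output so the comparison logic is visible:

```shell
# In the real run, each side emits "table<TAB>row_count" pairs, e.g. one
# COUNT(*) per table driven by information_schema, roughly:
#   ssh josema@server "mysql -N -e 'SELECT COUNT(*) FROM mydb.products'"
# Here two stand-in listings (invented numbers) simulate that output.
printf 'products\t1523\norders\t8410\n' > source_counts.txt
printf 'products\t1523\norders\t8410\n' > target_counts.txt

# The actual check: any difference means the migration is not done.
if diff -u source_counts.txt target_counts.txt > /dev/null; then
  echo "row counts match, safe to flip DNS"
else
  echo "MISMATCH: do not flip DNS yet"
fi
```

For the critical tables I also compared CHECKSUM TABLE output on both sides, since matching row counts alone won't catch a row that imported with mangled contents.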

By 4:15 AM, the new database was live for good. The app was running faster on MySQL 8, naturally, thanks in part to query optimizer improvements. The client noticed absolutely nothing, which, in our world, is the highest compliment.

Database migrations don’t have to be terrifying. But they demand respect. If you treat them like a routine chore, they’ll remind you of that when you least expect it. Plan the pipeline, double-check the encoding, and always, always, always have a rollback script ready before you get to work. And, above all, don’t underestimate the power of a good cup of coffee at 3 AM. Because, at the end of the day, all that really matters is that the site keeps running without the client ever noticing a thing.

From the Lab

This experiment was conducted by Ionastec. Need this level of technical rigor for your business?
