Docker - MySQL Persistent Data Volume Backup and Restore

While the original intention of this post was to demonstrate backing up and restoring a docker volume's contents, the setup process covers a lot of useful docker ground before we ever back up or restore anything. If you are only interested in the backup and restore part of this tutorial, just skip to the end. Otherwise, here are some other things you can expect to find in this "tutorial":

  • Using docker-compose and yml to compose a couple of containers that communicate with each other

  • Creating a CommandBox Lucee-light@5.3.4+74 docker container

  • Creating a MySQL docker container instance with a persistent data volume that lives outside of the container. We will also use the docker-entrypoint for automatically executing a SQL script to initialize our database and table

  • Backing up that persistent MySQL data volume

  • Restoring that MySQL data backup in a simulated disaster scenario (i.e. we will delete the MySQL container and data volume and recover from a backup)

  • Quickly testing/confirming the backup is valid (as much as visually seeing the files in the backup can confirm that)

The biggest chunk of this post is setting up a test environment and seeding it with some data so volume back up and restore can be demonstrated in a real-world scenario. You can skip to the end if you already have a volume running that you'd like to backup and restore.

It's important to point out that this scenario backs up the database files to the same host the containers live on. Unless you move those backups elsewhere (scp, offsite tapes, etc.), losing the host means losing the backups along with it. This is more of a run-through of the concepts (and a place for me to keep a reference).

I am performing these tasks on an iMac Pro running macOS Catalina 10.15.7 so, depending on your OS, there may be some things you need to adjust for such as commands for creating directories/files.

First, we need to set up a working directory. The name of the directory will drive container names later, so if you use a different name, be sure to adjust those references. I will just call mine acoderslife.

mkdir -p ~/acoderslife/app
mkdir ~/acoderslife/backup
cd ~/acoderslife
mkdir -p ~/acoderslife/build/mysql/initdb/
touch ~/acoderslife/build/mysql/initdb/init.sql
touch ./docker-compose.yml
touch ./app/Application.cfc
touch ./app/index.cfm
touch ./app/seed.cfm

So what is all that?
- ~/acoderslife - Top-level directory to house everything for the docker environment we are authoring
- ~/acoderslife/app - This is where the web application will live
- ~/acoderslife/build/mysql/initdb/ - This directory will be mounted to a special docker entry point on the MySQL container so all SQL scripts within it are executed (alphabetically) the first time the MySQL server comes online
- ~/acoderslife/build/mysql/initdb/init.sql - Our init script will just create a database and a table. The application will later add data through seed.cfm
- ~/acoderslife/docker-compose.yml - The primary source for all our docker environment instructions
- ~/acoderslife/app/Application.cfc - The primary Application.cfc for the Lucee app. It will simply define a datasource
- ~/acoderslife/app/index.cfm - We will use index.cfm to view the MySQL data
- ~/acoderslife/app/seed.cfm - Adds some data to the database so we can see it persist through restarts and container delete/recreate, and prove our backup/restore did what was expected.
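One side note on the initdb directory above: since the scripts in it run in alphabetical order, you can prefix filenames with numbers when ordering matters. A quick local sketch of that ordering (the filenames here are just an illustration, not part of this tutorial):

```shell
# scripts in /docker-entrypoint-initdb.d execute in alphabetical order the
# first time the server initializes, so numeric prefixes control ordering
rm -rf /tmp/initdb-demo
mkdir -p /tmp/initdb-demo
touch /tmp/initdb-demo/02_seed.sql /tmp/initdb-demo/01_schema.sql
# ls sorts alphabetically by default, matching the execution order
ls /tmp/initdb-demo
```

We only have one script in this tutorial, so plain init.sql is fine.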

As mentioned, the ~/acoderslife/build/mysql/initdb/init.sql file will be mounted in a special location to allow it to be automatically executed when the MySQL instance comes online. So we can use it to create our database and a test table. The contents of the SQL file will handle that.

CREATE DATABASE acoderslife;

USE acoderslife;

CREATE TABLE `acoderslife_table` (
  `uuid` varchar(36) NOT NULL,
  `date` timestamp NOT NULL
);

Once you have an Application.cfc file, add the following content to it. It simply gives the application a random name per request, adds a datasource pointing to the database our init script creates, and makes that datasource the application's default by assigning it to this.datasource.

component {

	this.name = hash(createuuid());

	this.datasources["acoderslife"] = {
		class: 'com.mysql.cj.jdbc.Driver'
		, connectionString: 'jdbc:mysql://mysql:3306/acoderslife?useUnicode=true&characterEncoding=UTF-8&useLegacyDatetimeCode=true'
		, username: 'root'
		, password: "@password123"
	};

	this.datasource = "acoderslife";

}

The ~/acoderslife/app/index.cfm just needs to query our database table and dump the data so we can have a quick visual of what is actually in the database.

<!--- Get all data from acoderslife_table --->
<cfquery name="myQuery">
	SELECT * FROM acoderslife_table
</cfquery>

<!--- link to seed another record --->
<cfoutput><b><a href="seed.cfm">Add a#myquery.recordcount ? "nother" : ""# record</a></b><br /><br /></cfoutput>

<br /><br />

<!--- output a hash of the data that can be used to compare later after a restore --->
<cfdump var="#hash(serialize(myquery))#" label="DATA HASH" />

<br /><br />

<!--- dump the data to the screen --->
<cfdump var="#myQuery#" label="QUERY DATA" />

The ~/acoderslife/app/seed.cfm file will just create a new record with some random data each time you load the page and then redirect you back to index.cfm. Once everything is up and running, we will hit this page a few times to make sure data exists in the database that we can backup, restore and verify.

<!--- Insert a new random record with a timestamp of now --->
<cfquery>
	INSERT INTO acoderslife_table (`uuid`, `date`)
	VALUES (
		<cfqueryparam cfsqltype="varchar" value="#createuuid()#" />,
		<cfqueryparam cfsqltype="timestamp" value="#createOdbcDateTime(now())#" />
	)
</cfquery>

<!--- Redirect back to index.cfm to show the data --->
<cflocation url="/" addtoken="false" />

And, finally, the part you've all been waiting for: ~/acoderslife/docker-compose.yml. First the contents, then I will explain what you are looking at. If I had done that with inline comments, it would have made the file look much more complex than it actually is.

version: "3.8"

volumes:
  mysqldata:

services:

  # CFML Engine
  cfml:
    image: ortussolutions/commandbox
    # environment variables
    environment:
      # cfconfig convention: sets the Lucee admin password
      CFCONFIG_ADMINPASSWORD: "@password123"
      # same as starting the server with --debug
      DEBUG: true
    ports:
      # bind public port to 8080
      - "8080:8080"
    volumes:
      - ./app:/app

  # MySQL Server
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "@password123"
    ports:
      - "33066:3306"
    volumes:
      - ./build/mysql/initdb:/docker-entrypoint-initdb.d
      - mysqldata:/var/lib/mysql

So, first of all, we are telling docker-compose to use version 3.8 syntax, which, at the time of this writing, was the latest and greatest syntax for Docker engine 19.03.0+. You can read more about docker-compose versioning in the official Docker documentation.

The next bit of information in the file is the volumes section. We are simply defining a single volume with a name of mysqldata.

The services section, which is next, is where you will define each of the individual containers to run and how to create them. We are not using any dockerfiles in this instance but, if we were, we could tell the individual service to look for a dockerfile instead of defining all of the information inline. We'll stick to inline for this example.
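For reference, pointing a service at a Dockerfile instead of an image would look something like the fragment below. This is standard compose `build:` syntax, but the context path and Dockerfile are hypothetical; we don't use them in this tutorial.

```yaml
services:
  cfml:
    # build from a local Dockerfile instead of pulling a prebuilt image;
    # ./build/cfml is a hypothetical context directory containing a Dockerfile
    build:
      context: ./build/cfml
      dockerfile: Dockerfile
```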

The first service defined is one named "cfml".
  • image - We are telling the cfml container to use the official ortussolutions/commandbox image to create our "cfml" container.

  • environment - This section is where you define any environment variables you want to be available to your container as well as to your eventual CommandBox and Lucee instances within that container. We are just defining two; CFCONFIG_ADMINPASSWORD is a conventionally named variable that will tell cfconfig (already installed in the official CommandBox image) to set the Lucee admin password. DEBUG is the same as using the --debug flag to start your Lucee instance; it allows debug output to be directed to the console (in our case, it will be output in the container logs)

  • ports - The ports section is where you can bind a container's exposed ports to the host OS's ports. The official CommandBox instance uses port 8080 as the Lucee port and exposes it automatically for us. We could literally just bind port 80 on the host OS to it with 80:8080 but, for now, we'll stick to using 8080 inside and outside of the container

  • volumes - The volumes node is where you would bind any volumes or local directories to a path within the container. Here, we are binding our local ./app directory to /app within the container. This will allow us to make realtime changes to anything in the ./app directory and have them instantly reflected within the container. This is because the container is just linking directly to our ./app directory instead of copying the files into itself.

The second and last service in this example is the mysql container. There are only a couple of differences between it and the previous container.
  • First of all, we obviously named it "mysql".

  • Second, we are using the official mysql image as the base for this service.

  • Next, we are using an environment variable that is already looked for by the mysql docker image to set the MySQL root password (MYSQL_ROOT_PASSWORD). If you change this, you will need to change references to it in Application.cfc as well.

  • Next, we are binding port 33066 on the host OS to the default 3306 MySQL port within the container.

  • Next, we are mounting a couple of things. First, like the previous ./app example, we are mounting the local directory ./build/mysql/initdb to a special location in the mysql container at /docker-entrypoint-initdb.d. When the container spins up the MySQL instance, it will look for (and find) our init.sql script in this directory and execute it automatically for us. That will create our database and table.

    Finally, the second volume entry will mount the mysqldata volume we created at the beginning of this file to the /var/lib/mysql directory within our container. /var/lib/mysql is where MySQL, by default, will write all of our schema and data information. Since that directory is essentially just a pointer to an external volume, our container will be nothing but a MySQL instance that can be blown away and recreated whenever we want without worrying about deleting the data.

Now that everything is in place, all we need to do is spin up our environment with one command (make sure the command is run from ~/acoderslife/):
docker-compose up

The first time you run the command, docker will likely not find the images in your local cache so it may take it a minute to download them. Subsequent commands will not have to download the images. Because we did not add the -d switch to the docker-compose up command, we will remain attached to the process and be able to see all of the information roll by as the containers are created and services started.

Once everything is done, you should have two docker containers running:
- acoderslife_mysql_1
- acoderslife_cfml_1

You should also have a new volume named acoderslife_mysqldata.

The following commands will show all of the above.

docker container ls
docker volume ls

Now, if you open a browser and point to http://localhost:8080, you should see an empty query dump (because we have not put any data into the database yet). At the top of the page should be a link to add a record and a hash of the current dataset. Click that link as many times as you'd like to have some data inserted into the database. Once you are satisfied, make a note of the data and/or the hash so you can use it later to verify that the data restored is accurate.

Now, this may sound a little strange, but we can use another throwaway docker container to back up the volume being used by our MySQL instance. Before we do that, we need a place on the local OS/host to store those backups. To keep all the parts of this tutorial together, we can just use ~/acoderslife/backup (which we already created above).

The following docker command will do each of the following (if you do not have the official ubuntu image in your local registry cache, the command will download it the first time you run it; subsequent backups will be much faster):
  • spin up a throwaway Ubuntu instance

  • mount our persistent volume to /var/lib/mysql

  • mount our backup directory to /backup within the container

  • package up the contents of our volume in a timestamped tar file and store it in our ~/acoderslife/backup directory

  • exit the container and delete it. You can confirm the container is gone with the "docker container ls" command

docker run --rm --volumes-from acoderslife_mysql_1 -v ~/acoderslife/backup:/backup ubuntu bash -c "cd /var/lib/mysql && tar cvf /backup/mysql_$(date "+%Y.%m.%d.%H.%M.%S").tar ."
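One detail worth noting: the $(date ...) substitution expands in your host shell before docker ever runs, so the tar file is named on the host with a sortable timestamp. A quick sketch of just the naming part:

```shell
# build the timestamped backup filename the same way the docker command does;
# the format sorts chronologically, e.g. mysql_2020.11.05.14.30.00.tar
STAMP=$(date "+%Y.%m.%d.%H.%M.%S")
echo "mysql_${STAMP}.tar"
```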

If you check the ~/acoderslife/backup directory, you should now see a timestamped mysql_*.tar file. This is a backup of everything in our volume (which is everything in the /var/lib/mysql directory in the mysql container instance).

Now you can hit the "Add another record" link a couple more times if you'd like. It will only serve as further proof that our backup was restored because any records you add after the backup is created will no longer exist when that backup is restored.

When you are ready, it's time to cause some damage! Stop your acoderslife_mysql_1 container and delete it. Then delete your acoderslife_mysqldata volume. This is just to simulate a disaster from which you need to recover. The following commands will take care of the deletions.

docker stop acoderslife_mysql_1
docker rm acoderslife_mysql_1
docker volume rm acoderslife_mysqldata

The next step to recover our data is to create the missing volume (that we just deleted).
docker volume create acoderslife_mysqldata

Now that our volume is back, we can restore our backup to it. Once again, we can spin up a throwaway ubuntu container to handle this for us. Run the following command to restore your backup (be sure to adjust the tar file name to match your timestamped file).

docker run --rm -v acoderslife_mysqldata:/volume -v ~/acoderslife/backup:/backup ubuntu sh -c "cd /volume && tar xvf /backup/mysql_<your-timestamp>.tar"

Breaking that down: the command creates a new ubuntu container, mounts our newly created and empty "acoderslife_mysqldata" volume to /volume inside the container, mounts our ~/acoderslife/backup directory to /backup inside the container, and finally extracts our tar file from /backup to /volume. When the command completes, the container will stop and delete itself.
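If you want to convince yourself the tar roundtrip behaves this way without involving docker at all, the same backup/restore cycle can be simulated with plain local directories (the /tmp paths here are throwaway stand-ins, not part of the tutorial):

```shell
# "volume" stands in for /var/lib/mysql, "backup" for ~/acoderslife/backup
rm -rf /tmp/voldemo
mkdir -p /tmp/voldemo/volume /tmp/voldemo/backup /tmp/voldemo/restored
echo "schema data" > /tmp/voldemo/volume/ibdata1
# backup: archive the volume's contents from inside the directory,
# so the tar contains relative paths (the trailing "." matters)
(cd /tmp/voldemo/volume && tar cf /tmp/voldemo/backup/mysql_demo.tar .)
# restore: extract the archive into a fresh, empty "volume"
(cd /tmp/voldemo/restored && tar xf /tmp/voldemo/backup/mysql_demo.tar)
cat /tmp/voldemo/restored/ibdata1
# prints: schema data
```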

You could run another command for some visual verification but, essentially, you are all done with the restore and simply need to cd into ~/acoderslife and run the "docker-compose up" command again to bring up the mysql container we deleted.

If you do want to list the contents of the volume (at any time), a quick, easy way to do that is with, you guessed it, another throwaway container. The following command will spin up an ubuntu instance, mount our volume, list its contents, exit and delete itself. It will only list top-level files and directories; you can change -lah to -lahR to make it recursive.

docker run --rm -v acoderslife_mysqldata:/volume ubuntu sh -c "ls -lah /volume"

While that all seemed long and complex, in the end it was really just one command to back up a volume and one command to restore that backup to it; everything else was just setup to give us a mockup environment in which to perform that task.
