
Running MySQL in Docker Using the Autopilot Pattern and Containerbuddy

This article explains how to deploy a stateful MySQL service in Docker containers using the Autopilot pattern, Containerbuddy, Consul, and Percona XtraBackup to achieve automated bootstrapping, scaling, health‑checking, failover, and self‑healing without manual intervention.

Art of Distributed System Architecture Design

Autopilot Pattern for MySQL

The Autopilot Pattern embeds lifecycle automation inside each container, removing the need for an external orchestrator to handle service discovery, topology changes, backups, and failover. The pattern is demonstrated with a stateful MySQL deployment that runs entirely inside Docker.

Architecture Components

Percona Server 5.6 – MySQL‑compatible database engine. Hot‑snapshot backups are created with XtraBackup.

Consul – Key‑value store used for service discovery, health checks, and an atomic lock that guarantees a single primary at any time.

Manta – Joyent object storage where snapshot files and streamed binlogs are persisted.

Containerbuddy – Runs as PID 1 inside the MySQL container. It registers onStart, health, and onChange handlers that interact with Consul keys and checks.

triton-mysql.py – Small Python helper invoked by Containerbuddy to perform heavy‑weight tasks such as creating backups, streaming binlogs, and synchronising replicas.
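A minimal Containerbuddy configuration wires these pieces together. The sketch below is illustrative, not the project's exact file: the Consul address, poll intervals, TTLs, and the triton-mysql.py subcommand names are assumptions.

```json
{
  "consul": "consul:8500",
  "onStart": "python /bin/triton-mysql.py on_start",
  "services": [
    {
      "name": "mysql",
      "port": 3306,
      "health": "python /bin/triton-mysql.py health",
      "poll": 5,
      "ttl": 25
    }
  ],
  "backends": [
    {
      "name": "mysql-primary",
      "poll": 10,
      "onChange": "python /bin/triton-mysql.py on_change"
    }
  ]
}
```

The `health` command refreshes a TTL check in Consul on each successful poll, while the `backends` entry watches the primary's registration and fires `onChange` when it changes.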

All source code and configuration files are available at https://github.com/your-org/autopilot-mysql (replace with the actual repository URL).

Process Tree Inside the Container

When a MySQL node starts, Containerbuddy launches triton-mysql.py, forks the MySQL daemon, and starts the health and change handlers. The resulting process tree looks like:

root@993acf351cd9:/# ps axo uid,pid,ppid,stime,cmd
UID    PID  PPID  STIME  CMD
root     1     0  19:02  /bin/containerbuddy
mysql   94     1  19:02  |_ mysqld --console --gtid-mode=ON ...
root   107     1  19:04  |_ python /bin/triton-mysql.py health
root   109     1  19:04  |_ /usr/bin/innobackupex --no-timestamp ...
root   120     1  19:06  |_ python /bin/triton-mysql.py health
root   121     1  19:06  |_ mysql -u repl -p...

Self‑Assembly

Only a few Docker images are required, so no external scheduler is needed. The entire stack can be started with a single command:

docker-compose up -d

The first container registers itself with Consul, attempts to discover an existing primary, and, if none is found, promotes itself to primary. During promotion it:

Initialises the database schema.

Creates a Consul session and acquires an atomic lock that stores the master password, ensuring exclusivity.

Starts a periodic XtraBackup job that writes a snapshot to a temporary location, then uploads the snapshot and the latest binlog to Manta.

Writes the snapshot and binlog URLs into Consul keys for replicas to consume.
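The session-plus-lock step above hinges on Consul's atomic check-and-set acquire semantics. The sketch below models that logic with an in-memory stand-in for Consul's session and KV API; the class, the `mysql-primary-lock` key name, and the node names are illustrative assumptions, not the project's actual code.

```python
import uuid

class ConsulKV:
    """In-memory stand-in for Consul's session + KV acquire API."""
    def __init__(self):
        self.locks = {}  # key -> (session_id, value)

    def create_session(self):
        # Real Consul: PUT /v1/session/create returns a session ID.
        return str(uuid.uuid4())

    def acquire(self, key, session, value):
        # Real Consul: PUT /v1/kv/<key>?acquire=<session> succeeds only
        # if no session currently holds the key (atomic check-and-set).
        if key in self.locks:
            return False
        self.locks[key] = (session, value)
        return True

def bootstrap(node_name, consul, master_password):
    """Decide primary vs. replica the way an onStart handler would."""
    session = consul.create_session()
    if consul.acquire("mysql-primary-lock", session, master_password):
        return (node_name, "primary")
    return (node_name, "replica")

kv = ConsulKV()
print(bootstrap("mysql-1", kv, "s3cret"))  # -> ('mysql-1', 'primary')
print(bootstrap("mysql-2", kv, "s3cret"))  # -> ('mysql-2', 'replica')
```

Because the acquire is atomic on the Consul side, two containers starting simultaneously can never both see an unheld lock, which is what guarantees a single primary.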

Scaling replicas is straightforward:

docker-compose scale mysql=3

Each replica’s onStart handler performs the following steps:

Query Consul for the current primary’s address.

Download the latest snapshot URL from Consul and restore it using innobackupex.

Fetch the most recent binlog from Manta and apply it.

Start replication using GTIDs to guarantee consistency.

Register a healthy status in Consul with a TTL‑based health check.
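The replication step reduces to building a single CHANGE MASTER statement: with GTIDs, MASTER_AUTO_POSITION=1 lets MySQL work out the correct binlog coordinates itself, so no file/offset bookkeeping is needed. The helper below is a sketch; the replication user and host values are placeholders, not values from the project.

```python
def change_master_statement(primary_host, repl_user, repl_password, port=3306):
    """Build the CHANGE MASTER statement a replica runs after restoring
    the snapshot. MASTER_AUTO_POSITION=1 relies on GTIDs, so no explicit
    binlog file or offset has to be tracked."""
    return (
        "CHANGE MASTER TO "
        f"MASTER_HOST='{primary_host}', "
        f"MASTER_PORT={port}, "
        f"MASTER_USER='{repl_user}', "
        f"MASTER_PASSWORD='{repl_password}', "
        "MASTER_AUTO_POSITION=1"
    )

# primary_host would come from the Consul lookup in step 1:
print(change_master_statement("10.0.0.5", "repl", "replpass"))
```

After executing this statement (followed by START SLAVE), the replica catches up from wherever the restored snapshot plus applied binlog left off.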

Self‑Monitoring

Containerbuddy runs a lightweight health probe inside each container every few seconds. The probe executes a simple SELECT 1 via the bundled MySQL client. If the query succeeds, Containerbuddy writes a TTL‑based health entry to Consul; a failure causes the health check to be marked unhealthy, triggering the onChange workflow.
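A health handler along these lines maps the probe onto a process exit code, which Containerbuddy then turns into a Consul TTL refresh or a failure. The query-runner injection is an illustrative structure for testability, not the project's exact code; in the real container the runner would shell out to the bundled mysql client.

```python
def health_check(run_query):
    """Return 0 (healthy) if SELECT 1 round-trips, 1 otherwise.
    run_query is any callable that executes SQL against the local mysqld."""
    try:
        return 0 if run_query("SELECT 1") == [(1,)] else 1
    except Exception:
        # Connection refused, timeout, crashed mysqld -- all unhealthy.
        return 1

# Stand-ins for a live and a dead database:
alive = lambda sql: [(1,)]
def dead(sql):
    raise ConnectionError("mysqld is not responding")

print(health_check(alive))  # 0 -> Containerbuddy refreshes the TTL check
print(health_check(dead))   # 1 -> check goes unhealthy, onChange fires
```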

Self‑Repair (Failover)

If the primary container stops, Containerbuddy removes its registration from Consul. All replicas receive an onChange event, wait for the Consul lock to be released, and then compete for the lock. The winner acquires the lock, promotes itself to primary, and writes a new master password to Consul. The remaining replicas automatically re‑configure to replicate from the new primary.
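The failover race can be sketched with the same atomic-lock idea: every surviving replica attempts the acquire, exactly one succeeds, and the losers re-point their replication at the winner. The function and node names below are illustrative; in a real deployment the outcome depends on timing, not list order.

```python
def elect_new_primary(replicas, locks):
    """Simulate the onChange race after the old primary's registration
    disappears: each replica tries the atomic acquire in turn; exactly
    one succeeds and the rest become its followers."""
    winner = None
    followers = []
    for node in replicas:
        # Atomic check-and-set: only the first caller sees an empty slot.
        if "mysql-primary-lock" not in locks:
            locks["mysql-primary-lock"] = node
            winner = node
        else:
            followers.append(node)
    return winner, followers

winner, followers = elect_new_primary(["mysql-2", "mysql-3", "mysql-4"], {})
print(winner)     # mysql-2 promotes itself and writes a new master password
print(followers)  # ['mysql-3', 'mysql-4'] replicate from the new primary
```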

Tags: cloud-native, Docker, automation, MySQL, Consul, Autopilot Pattern, Containerbuddy
Written by

Art of Distributed System Architecture Design

Introductions to large-scale distributed system architectures; insights and knowledge sharing on large-scale internet system architecture; front-end web architecture overviews; practical tips and experiences with PHP, JavaScript, Erlang, C/C++ and other languages in large-scale internet system development.
