
Using DNS SRV and Consul for MySQL Router Failover and Load Balancing in InnoDB Cluster

This tutorial demonstrates how to replace the traditional VIP‑based MySQL Router deployment with DNS SRV and Consul service discovery, enabling automatic failover and load balancing for MySQL InnoDB Cluster using supported MySQL Connectors and a Python client.

Aikesheng Open Source Community

MySQL Router is the access entry point for an InnoDB Cluster. The official recommendation is to deploy the router on the same host as the application, so that the router itself does not become a single point of failure.

To avoid this coupling, DNS SRV can be used instead: the router no longer needs to be bound to the application host, clients discover router instances through DNS, no VIP or external load balancer is required, and the setup integrates with service-mesh-style architectures through Consul or similar discovery tools.

A DNS SRV record specifies a service's host, port, priority, and weight. Clients try the reachable target with the lowest priority value first; among targets of equal priority, those with higher weight are selected proportionally more often.
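The priority/weight selection rule can be sketched in a few lines of Python. This is an illustrative implementation of the RFC 2782 ordering (records grouped by ascending priority, weighted-random ordering within a group); the record dicts and hostnames are made up for the example:

```python
import random

def order_srv_targets(records, rng=random.random):
    """Order SRV records per RFC 2782: ascending priority value;
    within a priority group, weighted-random selection that favours
    higher weights."""
    ordered = []
    by_priority = {}
    for rec in records:
        by_priority.setdefault(rec["priority"], []).append(dict(rec))
    # Lowest priority value is the most preferred group.
    for priority in sorted(by_priority):
        group = by_priority[priority]
        while group:
            total = sum(r["weight"] for r in group)
            # If every weight is 0, fall back to uniform selection.
            pick = rng() * total if total else rng() * len(group)
            running = 0
            for i, r in enumerate(group):
                running += r["weight"] if total else 1
                if pick < running:
                    ordered.append(group.pop(i))
                    break
    return ordered

records = [
    {"target": "router1.node.consul", "port": 6446, "priority": 1, "weight": 10},
    {"target": "router2.node.consul", "port": 6556, "priority": 1, "weight": 10},
    {"target": "backup.node.consul",  "port": 6446, "priority": 2, "weight": 0},
]
print([r["target"] for r in order_srv_targets(records)])
```

The two equal-priority routers come back in random order (load balancing), while the priority-2 target is always last (failover only).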

As of version 8.0.19, the MySQL Connectors (Connector/NET, Connector/ODBC, Connector/J, Connector/Node.js, Connector/Python, and Connector/C++) implement DNS SRV support according to RFC 2782, honouring priority and weight for failover and load balancing.

Demo setup using Consul for service discovery:

1. Deploy Consul agent on the same node as MySQL Router, register the router service.

2. Configure the application connector to query Consul’s DNS SRV service address.

3. Consul returns the router’s address and port, which the application then contacts.
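The SRV answer Consul returns in step 3 can be inspected with `dig`; each answer line carries priority, weight, port, and target. A small sketch of parsing such answer lines (the sample answer text below is illustrative, not captured from a live Consul agent):

```python
from collections import namedtuple

SrvRecord = namedtuple("SrvRecord", "priority weight port target")

def parse_dig_srv(answer: str) -> list[SrvRecord]:
    """Parse answer-section lines of `dig <name> SRV`, formatted as:
    'name TTL IN SRV priority weight port target.'"""
    records = []
    for line in answer.strip().splitlines():
        fields = line.split()
        if len(fields) >= 8 and fields[3] == "SRV":
            prio, weight, port = (int(f) for f in fields[4:7])
            records.append(SrvRecord(prio, weight, port, fields[7].rstrip(".")))
    return records

sample = """\
router.service.consul. 0 IN SRV 1 1 6446 router1.node.dc1.consul.
router.service.consul. 0 IN SRV 1 1 6556 router2.node.dc1.consul."""
for rec in parse_dig_srv(sample):
    print(rec.port, rec.target)
```

In practice the connector performs this resolution internally; the parser above is only meant to make the record layout concrete.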

Step‑by‑step commands:

for i in `seq 4000 4002`; do
    echo "Deploy mysql sandbox $i"
    mysqlsh -- dba deploy-sandbox-instance $i --password=root
done

echo "Create innodb cluster..."
mysqlsh root@localhost:4000 -- dba create-cluster cluster01
mysqlsh root@localhost:4000 -- cluster add-instance --recoveryMethod=clone --password=root root@localhost:4001
mysqlsh root@localhost:4000 -- cluster add-instance --recoveryMethod=clone --password=root root@localhost:4002
for i in 6446 6556; do
    echo "Bootstrap router $i"
    mysqlrouter --bootstrap root@localhost:4000 --conf-use-gr-notifications -d router_$i --conf-base-port $i --name router_$i
    sed -i 's/level = INFO/level = DEBUG/g' router_$i/mysqlrouter.conf
    sh router_$i/stop.sh
    sh router_$i/start.sh
done
brew install consul
consul agent -dev &
consul services register -name router -id router1 -port 6446 -tag rw
consul services register -name router -id router2 -port 6556 -tag rw
dig router.service.consul SRV -p 8600
brew install dnsmasq
echo 'server=/consul/127.0.0.1#8600' > /usr/local/etc/dnsmasq.d/consul
sudo brew services restart dnsmasq
pip install mysql-connector-python
import mysql.connector
cnx = mysql.connector.connect(user='root', password='root', database='mysql_innodb_cluster_metadata', host='router.service.consul', dns_srv=True)
cursor = cnx.cursor()
cursor.execute("select instance_id from v2_this_instance")
for (instance_id,) in cursor:
    print(f"instance id: {instance_id}")
cursor.close()
cnx.close()

Running the Python client repeatedly shows that connections are distributed across both router instances, confirming load balancing and failover via DNS SRV.
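When one router instance goes down, the connector moves on to the remaining SRV targets; at the application level the same idea can be expressed as a small retry wrapper. This is a sketch with the connect function injected so the failover path can be exercised without a live cluster; the function names, the `ConnectionError` stand-in, and the fake connector are all illustrative:

```python
def connect_with_failover(connect_fn, attempts=3):
    """Call connect_fn until it succeeds or attempts are exhausted.
    connect_fn stands in for e.g. mysql.connector.connect(..., dns_srv=True),
    which itself walks the SRV targets in priority/weight order."""
    last_error = None
    for _ in range(attempts):
        try:
            return connect_fn()
        except ConnectionError as exc:  # a real connector raises its own error types
            last_error = exc
    raise last_error

# Simulate router1 failing once before the next SRV target answers.
calls = {"n": 0}
def fake_connect():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("router1 unreachable")
    return "connected via router2"

print(connect_with_failover(fake_connect))  # connected via router2
```

With the real connector this retry is largely unnecessary for the initial connection, since DNS SRV resolution already tries each target; a wrapper like this is mainly useful for reconnecting after an established connection drops.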

Tags: load balancing, MySQL, Consul, Router, InnoDB Cluster, DNS SRV, Python Connector
Written by

Aikesheng Open Source Community

The Aikesheng Open Source Community provides stable, enterprise‑grade MySQL open‑source tools and services, releases a premium open‑source component each year (1024), and continuously operates and maintains them.
