
Auto‑Updating Nginx Upstreams with Zookeeper Using nginx‑upstream‑reloader

This guide explains how to use the nginx‑upstream‑reloader module together with Zookeeper and the dyups module to automatically discover backend services, persist upstream configurations, and update Nginx without reloads, even when IP addresses change frequently.


1. Background

Many companies run dynamic scheduling systems (Mesos+Docker, Kubernetes, or custom solutions) where service instance IPs change frequently. Traditional Nginx configurations that hard‑code IPs become impractical, so a mechanism is needed to automatically refresh backend IPs in Nginx and apply the changes without a full reload.

2. Module Architecture

Earlier approaches used etcd/Consul with the nginx‑upsync‑module. In our environment backend information is stored in Zookeeper. Directly letting each Nginx worker connect to Zookeeper creates a large number of connections, so we built a module that pulls backend data from Zookeeper, updates the dyups shared‑memory interface, and also writes the upstream definition to a file for persistence.
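The single-agent design above can be sketched as one process per host that holds a single Zookeeper session and reacts to membership changes through a watch, instead of every Nginx worker opening its own connection. This is an illustrative sketch, not the module's actual code: it assumes the third-party `kazoo` client library, and assumes each child znode name encodes the backend's `host:port` (sequential znodes carry a 10-digit suffix); adjust the parsing to your registry's layout.

```python
import re

def servers_from_children(children):
    """Turn child znode names into 'host:port' strings.

    Assumption: children are named like '10.0.0.1:8080' or, for
    sequential znodes, '10.0.0.1:8080-0000000012'.
    """
    servers = []
    for child in children:
        # Strip a 10-digit sequential-node suffix if present.
        servers.append(re.sub(r"-\d{10}$", "", child))
    return sorted(servers)

def watch_backends(zk_hosts, node_path, on_change):
    """Hold ONE Zookeeper session for the whole host and push updates
    to a callback, avoiding a connection per Nginx worker."""
    from kazoo.client import KazooClient  # third-party ZK client (assumed)
    zk = KazooClient(hosts=zk_hosts)
    zk.start()

    @zk.ChildrenWatch(node_path)
    def _changed(children):  # re-invoked by kazoo on each membership change
        on_change(servers_from_children(children))
```

The callback passed as `on_change` is where the module would rewrite the upstream file and push the new list into dyups shared memory.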

3. Module Features

Fetch the list of backends registered in Zookeeper, format it, and save it to an Nginx upstream configuration file.

Write the persisted list into dyups' shared memory to update the upstream dynamically.

If Zookeeper becomes unavailable, stop updating both shared memory and the local file, keeping the last known good configuration.

Support reading from multiple Zookeeper clusters and multiple nodes.
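The fail-safe behaviour in the list above, keeping the last known good configuration when Zookeeper is unreachable, can be expressed as a small wrapper: a refresh cycle only replaces the current state when the fetch succeeds. A hypothetical sketch (`fetch_backends` stands in for the real Zookeeper read; names are illustrative):

```python
class UpstreamState(object):
    """Keeps the last known good backend list per upstream; a failed
    fetch leaves both dyups shared memory and the file untouched."""

    def __init__(self):
        self.last_good = {}  # upstream name -> list of 'host:port'

    def refresh(self, name, fetch_backends):
        try:
            servers = fetch_backends(name)
        except Exception:
            # Zookeeper unreachable: keep serving the previous list.
            return self.last_good.get(name)
        if servers:  # also guard against wiping the upstream on an empty read
            self.last_good[name] = servers
        return self.last_good.get(name)
```

Returning the previous list on failure means the caller never propagates a bad state downstream: the upstream file and shared memory are only rewritten from a successful read.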

4. Workflow Diagram

The overall flow: the reloader holds the Zookeeper sessions, pulls the registered backend list whenever it changes, writes the formatted upstream files under files_output_path, and pushes the same server lists into dyups shared memory through its HTTP interface.

5. Usage

Prerequisites: Nginx compiled with the dyups module, Python 2.6/2.7.

1. Clone the module code to the Nginx host

<code>cd /home/work
git clone http://v9.git.n.xiaomi.com/liuliqiu/nginx-upstream-reloader.git</code>

2. Install the Python virtual environment and dependencies

<code>cd nginx-upstream-reloader
bash install_venv.sh</code>

3. Edit the configuration file upstream_zk_nodes.conf

Case 1 – Multiple services registered in a single Zookeeper cluster:

<code># upstream_zk_nodes.conf
zk_servers: zk-hadoop-test01:11000,zk-hadoop-test02:11000
zk_nodes:
    bonus-api: /web_services/com.miui.bonus.api.resin-web</code>

Case 2 – Services spread across different Zookeeper clusters:

<code># upstream_zk_nodes.conf
- zk_servers: tjwqstaging.zk.hadoop.srv:11000
  zk_nodes:
    ocean-helloworld-upstream1: /ocean/services/job.ocean-helloworld-nginx-upstream_service.ocean-helloworld-nginx-upstream_cluster.staging_pdl.oceantest_owt.inf_cop.xiaomi
    ocean-helloworld-upstream2: /ocean/services/job.ocean-helloworld-nginx-upstream_service.ocean-helloworld-nginx-upstream_cluster.staging_pdl.oceantest_owt.inf_cop.xiaomi

- zk_servers: tjwqstaging.zk.hadoop.srv:11000
  zk_nodes:
    ocean-helloworld-upstream3: /ocean/services/job.ocean-helloworld-nginx-upstream_service.ocean-helloworld-nginx-upstream_cluster.staging_pdl.oceantest_owt.inf_cop.xiaomi</code>
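Both layouts can be normalized to a flat list of (zk_servers, upstream name, znode path) entries before the reloader starts watching. A sketch of that normalization, assuming the file has already been run through a YAML loader (Case 1 parses to a single mapping, Case 2 to a list of mappings); the function name is illustrative:

```python
def normalize_config(parsed):
    """Flatten either config shape into (zk_servers, upstream, path) tuples."""
    # Case 1 yields one mapping; wrap it so both shapes iterate the same way.
    clusters = parsed if isinstance(parsed, list) else [parsed]
    entries = []
    for cluster in clusters:
        servers = cluster["zk_servers"]
        for upstream, path in sorted(cluster["zk_nodes"].items()):
            entries.append((servers, upstream, path))
    return entries
```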

After starting the module, it generates files such as bonus-api.upstream or ocean-helloworld-upstream1.upstream under the directory configured by files_output_path (e.g., /home/work/nginx/site-enable).

Example generated upstream file:

<code>upstream bonus-api {
    server ...;
    server ...;
}</code>
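Producing such a file from a backend list is plain string formatting plus a careful write. A minimal sketch (function names and the atomic-rename detail are illustrative, not taken from the module's source):

```python
import os

def render_upstream(name, servers):
    """Format a backend list as an Nginx upstream block."""
    lines = ["upstream %s {" % name]
    for server in servers:
        lines.append("    server %s;" % server)
    lines.append("}")
    return "\n".join(lines) + "\n"

def persist_upstream(name, servers, output_path):
    """Write <files_output_path>/<name>.upstream via rename, so a
    concurrent Nginx reload never includes a half-written file."""
    target = os.path.join(output_path, name + ".upstream")
    tmp = target + ".tmp"
    with open(tmp, "w") as fh:
        fh.write(render_upstream(name, servers))
    os.rename(tmp, target)  # atomic on POSIX filesystems
```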

4. Include the generated upstream files in Nginx configuration

<code>include /home/work/nginx/site-enable/ocean-helloworld-upstream1.upstream;
include /home/work/nginx/site-enable/ocean-helloworld-upstream2.upstream;
include /home/work/nginx/site-enable/ocean-helloworld-upstream3.upstream;

server {
    listen 80;
    location /helloworld1 {
        set $ups1 ocean-helloworld-upstream1;
        proxy_pass http://$ups1;
    }
    location /helloworld2 {
        set $ups2 ocean-helloworld-upstream2;
        proxy_pass http://$ups2;
    }
    location /helloworld3 {
        set $ups3 ocean-helloworld-upstream3;
        proxy_pass http://$ups3;
    }
}</code>
Note the indirection through variables (set $ups1 ...; proxy_pass http://$ups1;): when proxy_pass references a variable, Nginx resolves the upstream name at request time rather than at configuration load, so upstreams replaced in dyups shared memory take effect on the very next request.

5. Enable the dyups interface

<code>server {
    listen 127.0.0.1:14443;
    server_name _;
    location / {
        dyups_interface;
    }
}</code>
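With the interface above listening on 127.0.0.1:14443, the dyups module exposes an HTTP API: GET /detail shows what is currently in shared memory, and POSTing "server host:port;" lines to /upstream/&lt;name&gt; replaces that upstream. A sketch of the update the reloader would issue (the helper only builds the request; dyups_post sends it, shown here with the Python 3 stdlib client):

```python
def build_dyups_request(name, servers, host="127.0.0.1", port=14443):
    """Build the URL and body for a dyups upstream replacement."""
    url = "http://%s:%d/upstream/%s" % (host, port, name)
    body = "\n".join("server %s;" % s for s in servers)
    return url, body

def dyups_post(name, servers):
    """Send the update; dyups answers 200 on success."""
    from urllib.request import Request, urlopen
    url, body = build_dyups_request(name, servers)
    return urlopen(Request(url, data=body.encode())).status
```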

6. Start the reloader and Nginx

<code>bash nginx-upstream-reloader/start.sh
/home/work/nginx/sbin/nginx</code>

6. Compatibility

The module has been tested on CentOS 6 and CentOS 7 and works on both containers and physical machines.

Tags: Python, Zookeeper, nginx, dynamic upstream, dyups, backend discovery
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation, accompanying you throughout your operations career as we grow together.
