
How to Auto-Update Nginx Upstreams with Zookeeper Using nginx-upstream-reloader

This guide explains how the nginx-upstream-reloader module automatically discovers backend IP changes from Zookeeper, persists upstream configurations, and updates Nginx in real time without reloads, covering architecture, features, workflow, and step‑by‑step usage.


1. Background

Many companies use dynamic scheduling systems (Mesos+Docker, Kubernetes, or custom solutions) where service instance IPs change frequently. Traditional deployments on static machines have stable IPs, allowing manual Nginx upstream configuration. In containerized environments, instance IPs change on each restart, making manual updates impractical. The nginx-upstream-reloader module addresses this by automatically syncing backend IP changes to Nginx.

2. Module Architecture

Some organizations use etcd or Consul with the nginx-upsync-module to achieve zero-reload upstream updates. Our environment, however, stores backend information in Zookeeper. An earlier Zookeeper-Nginx connector we evaluated opened one ZK connection per Nginx worker process, causing connection counts on the ZK cluster to surge. We instead combine the dyups module with custom code that pulls configuration from Zookeeper, updates the shared memory through dyups, and handles ZK failures gracefully.

The dyups module exposes an HTTP interface: a POST updates an upstream's server list in Nginx shared memory, and a GET retrieves the current list, all without reloading Nginx.

More details are available at https://github.com/yzprofile/ngx_http_dyups_module .

Because dyups only updates shared memory and cannot persist the configuration to a file, our module also persists the generated upstream files.
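This two-step update can be sketched in a few lines of Python (a Python 3 sketch for readability, though the module itself targets Python 2; the function names are illustrative, and the 127.0.0.1:14443 endpoint matches the dyups server block configured later in this guide):

```python
# Sketch (not the module's actual code) of the two-step update:
# render an upstream block for on-disk persistence, then POST the
# same server list to the dyups HTTP interface so the change takes
# effect in shared memory without an Nginx reload.
import urllib.request

def render_upstream(name, servers):
    """Render an nginx `upstream` block suitable for writing to a file."""
    lines = ["upstream %s {" % name]
    lines.extend("    server %s;" % s for s in servers)
    lines.append("}")
    return "\n".join(lines) + "\n"

def push_to_dyups(name, servers, endpoint="http://127.0.0.1:14443"):
    """POST `server ip:port;` directives to dyups for the named upstream."""
    body = "".join("server %s;" % s for s in servers).encode()
    req = urllib.request.Request("%s/upstream/%s" % (endpoint, name), data=body)
    return urllib.request.urlopen(req).read()
```

Writing the rendered block to a file keeps the configuration durable across Nginx restarts, while the POST keeps the running instance current.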

3. Module Features

Fetch backend lists registered in Zookeeper, format them, and write to Nginx configuration files for persistence.

Push the persisted upstream list into Nginx shared memory via dyups for dynamic updates.

When Zookeeper is unavailable, stop updating shared memory and configuration files, keeping the last known good state.

Support multiple Zookeeper clusters and nodes.
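The failure-handling behavior above, keeping the last known good state when Zookeeper is unavailable, can be sketched as follows (a minimal illustration; `fetch_backends` stands in for the real ZK lookup and is an assumed name):

```python
# Minimal sketch of the failure-handling behavior: on any fetch error,
# keep serving the last successfully retrieved backend list instead of
# emptying the upstream or touching the persisted files.

class UpstreamState:
    def __init__(self):
        self.last_good = {}  # upstream name -> list of "ip:port" strings

    def refresh(self, fetch_backends):
        """Return (backends, fresh). On failure, fall back to the cache."""
        try:
            backends = fetch_backends()
        except Exception:
            # Zookeeper unavailable: keep the last known good state and
            # signal the caller to skip the dyups/file update.
            return self.last_good, False
        self.last_good = backends
        return backends, True
```

Only when `fresh` is true would the reloader rewrite the upstream files and push to dyups; otherwise Nginx keeps routing to the previously known backends.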

4. Workflow

On startup, the module connects to the configured Zookeeper clusters and fetches the backend list for each registered node. It writes the formatted upstream blocks to configuration files for persistence, then pushes the same lists into Nginx shared memory through the dyups interface. It then watches Zookeeper for changes and repeats the cycle; if Zookeeper becomes unavailable, it stops updating and keeps the last known good state.

5. Usage

Prerequisites: Nginx compiled with the dyups module (ngx_http_dyups_module), and Python 2.6 or 2.7.

<code>cd /home/work
git clone http://v9.git.n.xiaomi.com/liuliqiu/nginx-upstream-reloader.git
cd nginx-upstream-reloader
bash install_venv.sh</code>

Edit conf/upstream_zk_nodes.conf to define Zookeeper servers and node paths. Example for a single ZK cluster:

<code># upstream_zk_nodes.conf
zk_servers: zk-hadoop-test01:11000,zk-hadoop-test02:11000
zk_nodes:
    bonus-api: /web_services/com.miui.bonus.api.resin-web</code>
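If you need to read this file from your own tooling (for validation or auditing), the single-cluster form can be parsed with a few lines of Python. This is a hypothetical parser written for illustration; the module's own parsing may differ:

```python
# Hypothetical parser for the YAML-style upstream_zk_nodes.conf shown
# above (single-cluster form): a `zk_servers` line listing the ZK
# ensemble, plus an indented `zk_nodes` mapping of upstream name -> path.

def parse_zk_conf(text):
    conf = {"zk_servers": [], "zk_nodes": {}}
    in_nodes = False
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.startswith("zk_servers:"):
            # Comma-separated host:port pairs for the ZK ensemble.
            conf["zk_servers"] = [s.strip() for s in line.split(":", 1)[1].split(",")]
            in_nodes = False
        elif line.startswith("zk_nodes:"):
            in_nodes = True
        elif in_nodes and ":" in line:
            # Each entry maps an upstream name to its ZK node path.
            name, path = line.split(":", 1)
            conf["zk_nodes"][name.strip()] = path.strip()
    return conf
```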

When the module starts, it creates bonus-api.upstream under the directory configured by files_output_path (e.g., /home/work/nginx/site-enable), containing an upstream block like:

<code>upstream bonus-api {
    server ...;
    server ...;
}</code>

For multiple ZK clusters, configure accordingly (YAML style shown in the original article).

Include the generated upstream files in your Nginx configuration, and reference the upstreams through variables in proxy_pass. The variable is important: it makes Nginx resolve the upstream name at request time, so requests pick up the server list that dyups has written to shared memory rather than the one bound when the configuration was loaded:
<code>include /home/work/nginx/site-enable/ocean-helloword-upstream1.upstream;
include /home/work/nginx/site-enable/ocean-helloword-upstream2.upstream;
include /home/work/nginx/site-enable/ocean-helloword-upstream3.upstream;

server {
    listen 80;
    location /helloworld1 {
        set $ups1 ocean-helloword-upstream1;
        proxy_pass http://$ups1;
    }
    location /helloworld2 {
        set $ups2 ocean-helloword-upstream2;
        proxy_pass http://$ups2;
    }
    location /helloworld3 {
        set $ups3 ocean-helloword-upstream3;
        proxy_pass http://$ups3;
    }
}</code>

Add a dedicated server to expose the dyups interface:

<code>server {
    listen 127.0.0.1:14443;
    server_name _;
    location / {
        dyups_interface;
    }
}</code>

Start the reloader first, then Nginx:

<code>bash nginx-upstream-reloader/start.sh
/home/work/nginx/sbin/nginx</code>

6. Compatibility

The module has been tested on CentOS 6 and CentOS 7 and works on both containers and bare‑metal machines.

Tags: Python, Zookeeper, Nginx, dynamic upstream, dyups, backend discovery
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends and regularly publishes widely read original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.
