Operations · 7 min read

Master Nginx Load Balancing: Simple Configs and Strategies Explained

Learn how to set up basic Nginx load balancing with a minimal nginx.conf, run multiple Node.js services, and compare the common strategies (round-robin, weighted round-robin, and IP hash), with code examples and test results that verify how requests are distributed.

WeDoctor Frontend Technology

Introduction

This article walks through a minimal nginx.conf that proxies requests to a simple Node.js service, so readers can grasp the basics of Nginx load balancing.

<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       8081;
        server_name  localhost;
        location / {
            proxy_pass      http://127.0.0.1:9000;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
</code>

And a minimal Node.js HTTP server:

<code>const http = require('http');
const server = http.createServer();
const host = '0.0.0.0';
const port = 9000;
let n = 0;
server.on('request', function (req, res) {
  n += 1;
  console.log('Request received: ', n);
  res.write('Hello World!!!');
  res.end();
});
server.listen(port, host, function () {
  console.log(`Server started, visit: http://${host}:${port}`);
});
</code>

What Is Load Balancing?

Load balancing distributes incoming requests across multiple servers, similar to assigning a heavy load of bricks to several workers so none are overloaded. Nginx can balance traffic based on server capacity and configurable strategies, rather than a strict equal split.

Load Balancing Strategies

Round‑Robin (Default)

Using multiple identical Node services on different ports (9000, 9001, and 9002), the upstream block distributes requests evenly.

<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream test {
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
    }
    server {
        listen       8081;
        server_name  localhost;
        location / {
            proxy_pass      http://test;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
</code>

Testing with 600 requests shows each service handling roughly 200 requests.
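The even split can be sketched in a few lines of JavaScript. This is a simulation of round-robin selection for intuition, not Nginx's actual implementation:

```javascript
// Simulation of plain round-robin selection: walk the server list
// cyclically, one request at a time.
function makeRoundRobin(servers) {
  let next = 0;
  return function pick() {
    const server = servers[next];
    next = (next + 1) % servers.length;
    return server;
  };
}

// Send a number of simulated requests and count hits per backend.
function simulate(pick, requests) {
  const counts = {};
  for (let i = 0; i < requests; i++) {
    const server = pick();
    counts[server] = (counts[server] || 0) + 1;
  }
  return counts;
}

const servers = ['127.0.0.1:9000', '127.0.0.1:9001', '127.0.0.1:9002'];
const counts = simulate(makeRoundRobin(servers), 600);
console.log(counts); // each backend receives exactly 200 requests
```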

Weighted Round‑Robin

Assigns different weights to servers, e.g., 50% to port 9000 and 25% to each of the others.

<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream test {
        server 127.0.0.1:9000 weight=2;
        server 127.0.0.1:9001 weight=1;
        server 127.0.0.1:9002 weight=1;
    }
    server {
        listen       8081;
        server_name  localhost;
        location / {
            proxy_pass      http://test;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
</code>

Tests confirm the expected 50/25/25 request split.
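Nginx does not send long runs of consecutive requests to the heavier server; it uses a smooth weighted round-robin algorithm that interleaves picks. A sketch of that algorithm (a simulation for intuition, not the actual Nginx source):

```javascript
// Sketch of smooth weighted round-robin: on every pick, each server's
// currentWeight grows by its configured weight; the server with the
// highest currentWeight wins and then pays back the total weight.
function makeWeightedRoundRobin(entries) {
  // entries: [{ server: '127.0.0.1:9000', weight: 2 }, ...]
  const peers = entries.map((e) => ({ ...e, currentWeight: 0 }));
  const totalWeight = peers.reduce((sum, p) => sum + p.weight, 0);
  return function pick() {
    let best = null;
    for (const peer of peers) {
      peer.currentWeight += peer.weight;
      if (best === null || peer.currentWeight > best.currentWeight) {
        best = peer;
      }
    }
    best.currentWeight -= totalWeight;
    return best.server;
  };
}

const pick = makeWeightedRoundRobin([
  { server: '127.0.0.1:9000', weight: 2 },
  { server: '127.0.0.1:9001', weight: 1 },
  { server: '127.0.0.1:9002', weight: 1 },
]);

const counts = {};
for (let i = 0; i < 400; i++) {
  const s = pick();
  counts[s] = (counts[s] || 0) + 1;
}
console.log(counts); // exactly 200 / 100 / 100 over 400 picks
```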

IP‑Hash

Routes requests based on the client IP, ensuring the same client consistently reaches the same backend.

<code>http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream test {
        ip_hash;
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;
        server 127.0.0.1:9002;
    }
    server {
        listen       8081;
        server_name  localhost;
        location / {
            proxy_pass      http://test;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
</code>

Because only one client machine was used, all 100 test requests were routed to the same backend, confirming the IP‑hash behavior.
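For IPv4 clients, Nginx's ip_hash hashes only the first three octets of the address, so all clients in the same /24 network land on the same backend. A simplified sketch of the idea (the hash function here is an illustration, not Nginx's actual one):

```javascript
// Simplified sketch of ip_hash: hash the first three octets of an
// IPv4 address (as Nginx does for IPv4) and map the result onto the
// server list. The hash itself is illustrative, not Nginx's.
function ipHashPick(clientIp, servers) {
  const octets = clientIp.split('.').slice(0, 3).map(Number);
  // Simple multiplicative hash over the three octets.
  const hash = octets.reduce((acc, o) => ((acc * 31 + o) >>> 0), 0);
  return servers[hash % servers.length];
}

const servers = ['127.0.0.1:9000', '127.0.0.1:9001', '127.0.0.1:9002'];

// The same client always reaches the same backend...
console.log(ipHashPick('203.0.113.7', servers));
// ...and so does any client in the same /24, because the last octet
// is ignored.
console.log(ipHashPick('203.0.113.200', servers));
```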

Conclusion

In just a few minutes you can understand and configure Nginx load balancing and choose the strategy that best fits your environment, with no need to be intimidated by complex setups.

Tags: backend, operations, load balancing, Node.js, Nginx, round robin, IP hash
Written by WeDoctor Frontend Technology

Official WeDoctor Group frontend public account, sharing original tech articles, events, job postings, and occasional daily updates from our tech team.
