
Mastering Nginx Load Balancing: Choosing and Tuning Layer 4 vs Layer 7

This guide explains the differences between Layer 4 and Layer 7 load balancing in Nginx, shows how to select the appropriate mode for various scenarios, provides detailed configuration examples—including upstream settings, health checks, SSL handling, and performance tuning—and shares best‑practice tips to avoid common pitfalls.


Overview

Operations engineers frequently use Nginx as a load balancer, but the default upstream round‑robin configuration is often insufficient for production workloads. Real‑world issues such as failed back‑ends, overloaded servers, or unstable WebSocket connections expose hidden pitfalls, especially when Layer 4 (TCP/UDP) timeouts are mis‑configured.

Layer 4 vs. Layer 7 Load Balancing

In the OSI model:

Layer 4 (Transport) : forwards traffic based solely on IP and port. Nginx implements this with the stream module; other options include LVS and cloud NLB/CLB services.

Layer 7 (Application) : parses HTTP/HTTPS, enabling routing by URL, headers, cookies, and SSL termination. Implemented via Nginx http module, HAProxy, or cloud ALB services.

When to Choose Each Layer

HTTP/HTTPS websites – Layer 7 for URL routing, header rewriting, and SSL offload.

TCP services (MySQL, Redis, custom protocols) – Layer 4 for raw forwarding without protocol parsing.

WebSocket – both work; Layer 7 can handle the HTTP upgrade (see the sketch after this list), while Layer 4 simply tunnels the TCP connection.

Extreme performance – Layer 4 because it avoids application‑level parsing.

Request‑level control (routing by URL, cookie, or header) – Layer 7 .
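
For WebSocket at Layer 7, the proxy must forward the HTTP upgrade handshake and keep the tunnel open. A minimal sketch (upstream address, path, and timeout are illustrative):

upstream ws_backend {
    server 192.168.1.10:8080;
}
server {
    listen 80;
    location /ws/ {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # forward the WebSocket upgrade handshake
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                 # idle tunnels outlive the 60s default
    }
}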

Environment Requirements

Nginx 1.9.0 or later (the stream module was introduced in 1.9.0; UDP proxying was added in 1.9.13).

Operating system: CentOS 7+ or Ubuntu 18.04+.

Compile with --with-stream when using Layer 4.

Verify stream support:

nginx -V 2>&1 | grep -- --with-stream
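
Distribution packages often build the stream module as a dynamic module instead of compiling it in. If that is how your package was built (an assumption to verify with the command above), load it explicitly at the top of nginx.conf:

# top of /etc/nginx/nginx.conf, outside any block
load_module modules/ngx_stream_module.so;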

Configuration Details

Layer 7 (HTTP/HTTPS) Load Balancing

Typical reverse‑proxy configuration:

# /etc/nginx/conf.d/lb.conf
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
server {
    listen 80;
    server_name www.example.com;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Load‑Balancing Algorithms (Layer 7)

Round‑robin (default)

upstream backend { server 192.168.1.10:8080; server 192.168.1.11:8080; }

Weighted round‑robin

upstream backend {
    server 192.168.1.10:8080 weight=5;
    server 192.168.1.11:8080 weight=3;
    server 192.168.1.12:8080 weight=2;
}

IP hash (session persistence)

upstream backend {
    ip_hash;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

Least connections

upstream backend {
    least_conn;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

Consistent hash (the hash directive with the consistent parameter is available in open-source Nginx 1.7.2+; older versions need a third-party module)

upstream backend {
    hash $request_uri consistent;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

Health Checks (Layer 7)

Open-source Nginx provides only passive health checks: a backend is marked unavailable for fail_timeout seconds once max_fails failed attempts occur within a fail_timeout window.

upstream backend {
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
}

For active health checks, open-source Nginx needs a third-party module such as nginx_upstream_check_module (Nginx Plus ships a built-in health_check directive):

# Clone the module and rebuild Nginx from source (nginx-1.24.0 assumed; match your installed version)
git clone https://github.com/yaoweibin/nginx_upstream_check_module.git
cd nginx-1.24.0
patch -p1 < ../nginx_upstream_check_module/check_1.20.1+.patch
# add your existing build flags (shown by nginx -V) alongside --add-module
./configure --add-module=../nginx_upstream_check_module
make && make install

# Example active check configuration
upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    check interval=3000 rise=5 fall=2 timeout=1000 type=http;
    check_http_send "GET /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}
# status page (goes inside a server {} block in the http context)
location /upstream_status {
    check_status;
    allow 127.0.0.1;
    deny all;
}

Layer 4 (TCP/UDP) Load Balancing

Define a stream block parallel to the http block.

# /etc/nginx/nginx.conf
stream {
    upstream mysql_backend {
        server 192.168.1.20:3306;
        server 192.168.1.21:3306;
    }
    server {
        listen 3306;
        proxy_pass mysql_backend;
    }
}
http {
    # existing HTTP configuration …
}

Or split into separate files for better organization:

# /etc/nginx/stream.conf
stream {
    include /etc/nginx/stream.d/*.conf;
}
# /etc/nginx/stream.d/mysql.conf
upstream mysql_backend {
    server 192.168.1.20:3306 weight=5;
    server 192.168.1.21:3306 weight=3;
}
server {
    listen 3306;
    proxy_pass mysql_backend;
    proxy_connect_timeout 10s;
    proxy_timeout 3600s;        # long idle timeout for MySQL
    proxy_socket_keepalive on;
}
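
For this layout to take effect, the main nginx.conf must pull in stream.conf at the top level, outside the http block. A one-line sketch assuming the paths above:

# /etc/nginx/nginx.conf (top level, alongside the http block)
include /etc/nginx/stream.conf;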

Layer 4 Timeout Tuning (Common Pitfall)

If proxy_timeout is left at the default 10 minutes, long‑lived MySQL connections may be closed, causing “MySQL server has gone away” errors. Extending the timeout resolves the issue:

stream {
    upstream mysql_backend { server 192.168.1.20:3306; }
    server {
        listen 3306;
        proxy_pass mysql_backend;
        proxy_connect_timeout 10s;
        proxy_timeout 3600s;   # 1 hour for idle connections
        proxy_socket_keepalive on;
    }
}

UDP Load Balancing (Layer 4)

stream {
    upstream dns_backend {
        server 8.8.8.8:53;
        server 8.8.4.4:53;
    }
    server {
        listen 53 udp;
        proxy_pass dns_backend;
        proxy_timeout 1s;
        proxy_responses 1;   # expect a single response packet
    }
}

Advanced Features (Both Layers)

Keepalive connection pool (Layer 7)

upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    keepalive 32;               # idle connections per worker
    keepalive_requests 1000;    # max requests per connection
    keepalive_timeout 60s;      # idle timeout
}
server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for keepalive
    }
}

Backup servers

upstream backend {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080 backup;   # used only when primary servers fail
}

Slow start (the slow_start parameter is an Nginx Plus feature; open-source Nginx and OpenResty do not support it)

upstream backend {
    server 192.168.1.10:8080 slow_start=30s;
    server 192.168.1.11:8080 slow_start=30s;
}

Routing by URL to different upstreams

upstream api_backend { server 192.168.1.10:8080; server 192.168.1.11:8080; }
upstream static_backend { server 192.168.1.20:80; }
server {
    listen 80;
    location /api/ { proxy_pass http://api_backend; }
    location /static/ { proxy_pass http://static_backend; }
    location / { proxy_pass http://api_backend; }
}

Header/Cookie based routing

upstream v1 { server 192.168.1.10:8080; }
upstream v2 { server 192.168.1.20:8080; }
map $http_x_version $backend_name {
    default v1;
    "v2"   v2;
}
server {
    location / { proxy_pass http://$backend_name; }
}
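
The heading also covers cookies: the same technique works on the $cookie_* variables. A sketch that swaps the header-based map above for a cookie-based one (app_version is a hypothetical cookie name; the v1/v2 upstreams are reused):

map $cookie_app_version $backend_name {
    default v1;
    "v2"    v2;
}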

SSL/TLS Configuration

SSL offload (Layer 7)

server {
    listen 443 ssl http2;
    server_name www.example.com;
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    location / {
        proxy_pass http://backend;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

SSL pass‑through (Layer 4)

stream {
    upstream https_backend {
        server 192.168.1.10:443;
        server 192.168.1.11:443;
    }
    server {
        listen 443;
        proxy_pass https_backend;
        ssl_preread on;   # traffic stays encrypted end to end; ssl_preread only peeks at the handshake for $ssl_preread_* variables
    }
}

SNI‑based routing (Layer 4)

stream {
    map $ssl_preread_server_name $backend_name {
        www.example.com api_backend;
        api.example.com api_backend2;
        default          api_backend;
    }
    upstream api_backend { server 192.168.1.10:443; }
    upstream api_backend2 { server 192.168.1.20:443; }
    server {
        listen 443;
        proxy_pass $backend_name;
        ssl_preread on;
    }
}

Best Practices and Cautions

Performance Tuning

Worker processes : worker_processes auto; (usually one per CPU core). Raise worker_connections (65535 below) and set worker_rlimit_nofile at least as high so each worker can actually open that many file descriptors.

worker_rlimit_nofile 65535;
events {
    worker_connections 65535;
    use epoll;
    multi_accept on;
}

System parameters (sysctl)

# /etc/sysctl.conf  (apply with: sysctl -p)
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
fs.file-max = 655350

Nginx HTTP optimizations

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    keepalive_requests 1000;
    client_body_buffer_size 10k;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
    gzip on;
    gzip_min_length 1k;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/javascript;
}

High Availability

Run multiple Nginx instances with Keepalived for active‑passive failover:

# yum install -y keepalived
# /etc/keepalived/keepalived.conf (master) – keepalived syntax uses no trailing semicolons
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100
    }
    track_script {
        check_nginx
    }
}
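
The configuration references a health-check script that is never shown. A minimal sketch of what /etc/keepalived/check_nginx.sh might contain (the restart-then-give-up logic is one common pattern, not the only one; remember chmod +x):

#!/bin/bash
# Exit non-zero when Nginx is down so Keepalived subtracts the weight and releases the VIP.
if ! pgrep -x nginx > /dev/null; then
    systemctl start nginx            # one recovery attempt before failing over
    sleep 2
    if ! pgrep -x nginx > /dev/null; then
        exit 1
    fi
fi
exit 0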

Monitoring

Enable the stub status page for basic metrics:

location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    allow 10.0.0.0/8;
    deny all;
}

For production monitoring, scrape nginx-prometheus-exporter with Prometheus.
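
One common way to wire this up, assuming the stub_status page above is reachable locally (the flag follows the exporter's documented -nginx.scrape-uri option):

nginx-prometheus-exporter -nginx.scrape-uri=http://127.0.0.1/nginx_status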

Common Pitfalls

Always test configuration with nginx -t before reloading.

Upstream names must be unique across all included files.

Trailing slash (or any URI part) in proxy_pass changes how the matched location prefix is rewritten; be explicit (see the example after this list).

Keepalive requires both proxy_http_version 1.1 and proxy_set_header Connection "".

Layer 4 connections are dropped if proxy_timeout is too short.

502 errors often stem from overly aggressive max_fails or slow back‑ends.

WebSocket disconnections are usually caused by short proxy_read_timeout values.

Missing X-Forwarded-For prevents back‑ends from seeing the real client IP.
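
To illustrate the trailing-slash pitfall (paths are hypothetical): when proxy_pass carries a URI, the part of the request path matched by the location is replaced by that URI; without one, the client's path is forwarded unchanged.

location /api/ {
    proxy_pass http://backend;      # /api/users -> /api/users (path passed through unchanged)
}
location /app/ {
    proxy_pass http://backend/;     # /app/users -> /users (the /app/ prefix is replaced by /)
}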

Compatibility Notes

HTTP/2 works only between the client and Nginx; proxy_pass talks to upstreams over HTTP/1.1 (or 1.0).

gRPC load balancing requires grpc_pass, not proxy_pass (see the sketch below).

Unix sockets can be mixed with TCP upstreams, but file permissions must allow Nginx access.
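
A minimal gRPC sketch (addresses, port, and certificate paths are illustrative; grpc_pass takes the grpc:// or grpcs:// scheme depending on whether the upstream itself speaks TLS):

upstream grpc_backend {
    server 192.168.1.30:50051;
    server 192.168.1.31:50051;
}
server {
    listen 8443 ssl http2;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    location / {
        grpc_pass grpc://grpc_backend;   # plaintext gRPC to the backends; use grpcs:// for TLS upstreams
    }
}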

Conclusion

Layer 7 is ideal for HTTP/HTTPS services, offering URL‑based routing, header manipulation, and SSL termination.

Layer 4 suits TCP/UDP services, provides higher throughput, and requires no protocol parsing.

Select algorithms based on statefulness: round‑robin for stateless traffic, ip_hash for session persistence, least_conn for long‑lived connections.

Enable keepalive pools to reuse connections; otherwise performance degrades.

Proper timeout settings, especially proxy_timeout for Layer 4, are essential to avoid unexpected disconnections.
