Nginx vs HAProxy: Enterprise Load Balancing from Zero to Production
This comprehensive guide compares Nginx and HAProxy in architecture, performance, configuration, high‑availability design, monitoring, tuning, and troubleshooting, providing step‑by‑step examples and a decision matrix to help engineers choose the right load‑balancing solution for enterprise workloads.
Architecture Comparison
Nginx
Event‑driven asynchronous architecture based on epoll/kqueue.
Low memory footprint (typically < 60 MB) and intuitive configuration.
Rich ecosystem of third‑party modules.
HAProxy
Specialised L4/L7 load balancer with a "do one thing well" philosophy.
Robust health‑check mechanisms and detailed statistics.
More structured configuration files.
Performance Tests
Test Environment
# Load balancer: 2 CPU, 4 GB RAM
# Backend servers: 4 × (1 CPU, 2 GB RAM)
# Network: 1 Gbps LAN
# Tools: wrk + ApacheBench
Static File Scenario
wrk -t12 -c1000 -d30s --latency http://lb-server/static/index.html
Nginx: 85 000 req/s, avg latency 11.8 ms
HAProxy: 78 000 req/s, avg latency 12.8 ms
Dynamic API Scenario
curl -X POST http://lb-server/api/users \
-H "Content-Type: application/json" \
-d '{"username":"test","email":"[email protected]"}'
Nginx: 45 000 req/s, avg latency 22.1 ms
HAProxy: 52 000 req/s, avg latency 19.2 ms
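As a sanity check on the figures above, the relative gaps can be computed directly from the measured numbers:

```shell
# Relative throughput gap per scenario, from the measurements above.
awk 'BEGIN {
  printf "static  (Nginx over HAProxy):  %+.1f%%\n", (85000 - 78000) / 78000 * 100
  printf "dynamic (HAProxy over Nginx):  %+.1f%%\n", (52000 - 45000) / 45000 * 100
}'
```

Nginx leads by roughly 9% on static files, while HAProxy leads by roughly 16% on the dynamic API, which matches each tool's design focus.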
Memory Usage
Nginx: 45 – 60 MB
HAProxy: 25 – 35 MB
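These figures can be reproduced roughly by summing the resident set size of every worker process. The helper below is a sketch: it assumes a procps-style `ps` (Linux), and absolute numbers will vary with connection count and buffer settings.

```shell
# Sketch: sum resident memory (MB) across all processes with a given name.
# Assumes procps `ps -C`; figures vary with workload and configuration.
rss_mb() {
  ps -C "$1" -o rss= | awk '{ sum += $1 } END { printf "%.1f\n", sum / 1024 }'
}
rss_mb nginx
rss_mb haproxy
```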
Practical Configuration
Nginx Load‑Balancing Example
# /etc/nginx/nginx.conf
upstream backend_servers {
server 192.168.1.10:8080 weight=3 max_fails=3 fail_timeout=30s;
server 192.168.1.11:8080 weight=2 max_fails=3 fail_timeout=30s;
server 192.168.1.12:8080 weight=1 backup;
keepalive 32;
}
server {
listen 80;
server_name api.example.com;
location / {
proxy_pass http://backend_servers;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Required for the upstream keepalive pool declared above to take effect:
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
}
Advanced Strategies
# IP‑hash session persistence
upstream backend_sticky {
ip_hash;
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
# Least‑connections algorithm
upstream backend_least_conn {
least_conn;
server 192.168.1.10:8080;
server 192.168.1.11:8080;
server 192.168.1.12:8080;
}
HAProxy Configuration
# /etc/haproxy/haproxy.cfg
global
daemon
user haproxy
group haproxy
maxconn 40000
# nbproc is deprecated (removed in HAProxy 2.5); scale with threads instead
nbthread 4
log 127.0.0.1:514 local0
defaults
mode http
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
errorfile 400 /etc/haproxy/errors/400.http
# … other errorfile definitions …
frontend web_frontend
bind *:80
bind *:443 ssl crt /etc/ssl/certs/example.com.pem
acl is_api hdr(host) -i api.example.com
acl is_static hdr(host) -i static.example.com
acl is_websocket hdr(Connection) -i upgrade
use_backend api_backend if is_api
use_backend static_backend if is_static
use_backend websocket_backend if is_websocket
default_backend web_backend
backend api_backend
balance roundrobin
option httpchk GET /health
server api1 192.168.1.10:8080 check weight 100 maxconn 1000
server api2 192.168.1.11:8080 check weight 100 maxconn 1000
server api3 192.168.1.12:8080 check weight 50 maxconn 500 backup
backend web_backend
balance leastconn
cookie SERVERID insert indirect nocache
server web1 192.168.1.20:8080 check cookie web1
server web2 192.168.1.21:8080 check cookie web2
server web3 192.168.1.22:8080 check cookie web3
listen stats
bind *:8404
stats enable
stats uri /stats
stats refresh 10s
stats admin if TRUE
High‑Availability Design
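The core of an HA pair is a floating virtual IP that moves between nodes. A quick way to confirm which node currently holds it is sketched below, assuming the VIP (192.168.1.100) and interface (eth0) used in the Keepalived example that follows:

```shell
# Does this node currently hold the VIP? (address/interface from the
# Keepalived example in this section)
if ip -4 addr show dev eth0 2>/dev/null | grep -q '192.168.1.100'; then
  echo "this node is MASTER (holds the VIP)"
else
  echo "this node is BACKUP (no VIP)"
fi
```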
Keepalived + Nginx
# /etc/keepalived/keepalived.conf (master)
vrrp_script chk_nginx {
script "/etc/keepalived/check_nginx.sh"
interval 2
weight -2
fall 3
rise 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass nginx_ha
}
virtual_ipaddress {
192.168.1.100/24
}
track_script { chk_nginx }
notify_master "/etc/keepalived/notify_master.sh"
notify_backup "/etc/keepalived/notify_backup.sh"
}
Health‑Check Script
#!/bin/bash
counter=0
while [ $counter -lt 3 ]; do
nginx_status=$(curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1/health)
if [ "$nginx_status" -eq 200 ]; then
exit 0
fi
counter=$((counter+1))
sleep 1
done
exit 1
Active‑Active Docker Compose
# Swarm stack file: deploy.replicas and the overlay network driver require
# `docker stack deploy`, not plain `docker compose up`.
version: '3.8'
services:
nginx-lb1:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
networks:
- lb_network
deploy:
replicas: 2
haproxy-lb1:
image: haproxy:2.4-alpine
ports:
- "8080:80"
- "8404:8404"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
networks:
- lb_network
deploy:
replicas: 2
networks:
lb_network:
driver: overlay
Monitoring & Operations
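The stub_status endpoint enabled in the next snippet returns a small fixed-shape plain-text report, so a throwaway parser is easy to sketch (sample output embedded here for illustration):

```shell
# Parse nginx stub_status output into key=value pairs.
parse_stub_status() {
  awk '
    /^Active connections:/ { print "active=" $3 }
    /^ /        { print "accepts=" $1; print "handled=" $2; print "requests=" $3 }
    /^Reading:/ { print "reading=" $2; print "writing=" $4; print "waiting=" $6 }
  '
}
parse_stub_status <<'EOF'
Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
EOF
# Real use: curl -s http://127.0.0.1/nginx_status | parse_stub_status
```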
Nginx Status Page
# /etc/nginx/conf.d/status.conf
# A location block must sit inside a server block; the listen port here is
# illustrative.
server {
listen 8081;
location /nginx_status {
stub_status;
access_log off;
allow 127.0.0.1;
allow 192.168.1.0/24;
deny all;
}
}
log_format detailed '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $upstream_addr $upstream_response_time $request_time';
access_log /var/log/nginx/detailed.log detailed;
HAProxy Health & Alert Script
#!/bin/bash
HAPROXY_STATS_URL="http://127.0.0.1:8404/stats;csv"
# Check backend health
unhealthy=$(curl -s "$HAPROXY_STATS_URL" | grep -cE 'DOWN|MAINT')
if [ "$unhealthy" -gt 0 ]; then
echo "WARNING: $unhealthy backend servers are down"
/usr/local/bin/send_alert.sh "HAProxy Backend Health Check Failed"
fi
# Check connection count
# Column 5 of the stats CSV is scur (current sessions); skip the header row.
connections=$(curl -s "$HAPROXY_STATS_URL" | awk -F',' 'NR > 1 {sum += $5} END {print sum + 0}')
if [ "$connections" -gt 10000 ]; then
echo "WARNING: High connection count: $connections"
fi
Prometheus Exporters
scrape_configs:
- job_name: 'nginx'
static_configs:
- targets: ['nginx-exporter:9113']
scrape_interval: 15s
- job_name: 'haproxy'
static_configs:
- targets: ['haproxy-exporter:8404']
scrape_interval: 15s
metrics_path: '/stats/prometheus'
Performance Tuning
Nginx
# /etc/nginx/nginx.conf (optimised)
worker_processes auto;
worker_rlimit_nofile 65535;
events {
use epoll;
worker_connections 65535;
multi_accept on;
}
http {
open_file_cache max=10000 inactive=60s;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 30;
keepalive_requests 1000;
gzip on;
gzip_vary on;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/javascript;
}
HAProxy
# /etc/haproxy/haproxy.cfg (optimised)
global
maxconn 100000
spread-checks 5
tune.maxaccept 100
tune.bufsize 32768
tune.rcvbuf.server 262144
tune.sndbuf.server 262144
# nbproc was deprecated in 2.3 and removed in 2.5; use threads with
# automatic CPU pinning instead:
nbthread 4
cpu-map auto:1/1-4 0-3
defaults
timeout connect 3s
timeout client 30s
timeout server 30s
timeout http-keep-alive 10s
timeout check 5s
option http-server-close
option forwardfor
option redispatch
retries 3
Troubleshooting
Common Nginx Issues
502 Bad Gateway – verify backend health, error logs, firewall rules, and upstream configuration.
High latency – increase upstream keepalive connections, analyse access logs for slow requests.
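When chasing 502s, the detailed access log configured in the Monitoring section makes it easy to see which upstream is failing. The sketch below assumes that exact log_format; counting fields from the line end keeps the upstream address addressable even though the quoted user-agent contains spaces:

```shell
# Tally 502 responses per upstream address ($(NF-2) is $upstream_addr in
# the `detailed` log format shown earlier).
count_502_by_upstream() {
  awk '$9 == 502 { n[$(NF-2)]++ } END { for (u in n) print u, n[u] }'
}
count_502_by_upstream <<'EOF'
10.0.0.1 - - [01/Jan/2024:00:00:00 +0000] "GET /api HTTP/1.1" 502 0 "-" "curl/8.0" 192.168.1.10:8080 0.002 0.003
10.0.0.2 - - [01/Jan/2024:00:00:01 +0000] "GET /api HTTP/1.1" 200 512 "-" "curl/8.0" 192.168.1.11:8080 0.010 0.011
EOF
# Real use: count_502_by_upstream < /var/log/nginx/detailed.log
```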
Common HAProxy Issues
Health‑check failures – check httpchk settings and server definitions.
Session persistence problems – ensure cookie configuration matches backend expectations.
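To see exactly which servers the health checks have taken out, the stats CSV exposed on the stats listener can be filtered. This is a sketch assuming HAProxy's standard CSV column layout (1 = pxname, 2 = svname, 18 = status):

```shell
# List servers the health checks have marked not-UP, from the stats CSV.
not_up_servers() {
  awk -F',' 'NR > 1 && $2 != "FRONTEND" && $2 != "BACKEND" && $18 !~ /^UP/ \
             { print $1 "/" $2, $18 }'
}
not_up_servers <<'EOF'
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status
api_backend,api1,0,0,3,10,1000,500,0,0,0,0,0,0,0,0,0,UP
api_backend,api2,0,0,0,0,1000,120,0,0,0,0,0,0,0,0,0,DOWN
EOF
# Real use: curl -s 'http://127.0.0.1:8404/stats;csv' | not_up_servers
```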
Decision Guidance
Choose Nginx when a familiar web server with built‑in load balancing is needed, the project is small‑to‑medium, and rapid deployment is a priority.
Choose HAProxy for high‑traffic, mission‑critical services that require advanced algorithms, fine‑grained health checks, and maximum availability.
Future Trends
Service mesh (e.g., Istio) and edge‑computing workloads are reshaping traffic management, but traditional L4/L7 load balancers remain relevant at the network edge. Upcoming features such as HTTP/3 support and WebAssembly extensions will extend the flexibility of both Nginx and HAProxy.
Conclusion
Both Nginx and HAProxy are mature, battle‑tested solutions. Selecting the right one depends on workload characteristics, team expertise, and operational requirements. Proper testing, monitoring, and high‑availability design are essential regardless of the chosen tool.
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.