Master Nginx Reverse Proxy on Ubuntu 24.04 & Rocky Linux 9.4 – From Installation to Monitoring

This comprehensive guide walks you through installing Nginx 1.27 on Ubuntu 24.04 LTS and Rocky Linux 9.4; configuring reverse proxying, load balancing, SSL/TLS, and WebSocket and gRPC support; tuning kernel and Nginx parameters; setting up health checks and high availability with Keepalived; and monitoring with Prometheus and Grafana – all with ready‑to‑use code snippets and scripts.

Ops Community

Overview

In a micro‑service environment each service typically exposes its own port (e.g. 192.168.1.50:8080, 192.168.1.50:8081). Forgetting the correct address leads to mis‑routing and production incidents. A single Nginx reverse‑proxy instance can expose only 80/443 and route requests to backend services based on server_name and location paths, providing a unified entry point.
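As a minimal illustration of that unified entry point, a single server block can fan requests out by path. This is a sketch only; the hostname, paths, and backend addresses are placeholders:

```nginx
# Hypothetical sketch: one entry point on port 80 routing two services by path
server {
    listen 80;
    server_name apps.example.com;

    # Orders service
    location /orders/ {
        proxy_pass http://192.168.1.50:8080/;
    }

    # Billing service
    location /billing/ {
        proxy_pass http://192.168.1.50:8081/;
    }
}
```

Clients only ever see apps.example.com; the host:port details of each backend stay internal.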

Technical Highlights

Nginx 1.27.x with HTTP/3 (QUIC), dynamic upstream resolution and built‑in health checks.

Tested on Ubuntu 24.04 LTS and Rocky Linux 9.4.

Supports HTTP/HTTPS, WebSocket, gRPC, TCP/UDP proxying, SSL termination, load‑balancing, rate limiting and Prometheus metrics.

Environment Requirements

# Operating System
Ubuntu 24.04 LTS / Rocky Linux 9.4

# Nginx version
1.27.3 (mainline branch, HTTP/3 ready)

# OpenSSL
3.2+ (TLS 1.3 & QUIC support)

# systemd
252+ (service management; Rocky Linux 9.4 ships 252, Ubuntu 24.04 ships 255)

# CPU
2 cores+ (plain proxying is light on CPU; TLS termination adds load)

# Memory
2 GB+ (each worker uses ~20‑50 MB)

# Disk
20 GB+ (logs should be on a separate partition)
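Before installing, you can sanity-check the host against these requirements. The `version_ge` helper below is a hypothetical convenience function, not part of any tool; substitute the real outputs of `openssl version`, `nginx -v`, and `systemctl --version`:

```shell
#!/usr/bin/env bash
# Compare dotted version strings: version_ge A B  ->  true when A >= B
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks with hard-coded versions for illustration
version_ge "3.2.1" "3.2"    && echo "OpenSSL OK"
version_ge "1.27.3" "1.27.0" && echo "Nginx OK"
version_ge "251" "252"       || echo "systemd too old"
```

`sort -V` (GNU coreutils version sort) does the heavy lifting, so pre-release suffixes and multi-digit components compare correctly.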

Installation Options

Package manager (quick start)

Ubuntu 24.04

# Import the official Nginx signing key
curl -fsSL https://nginx.org/keys/nginx_signing.key \
  | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg

# Add the mainline repository
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/mainline/ubuntu $(lsb_release -cs) nginx" \
  | sudo tee /etc/apt/sources.list.d/nginx.list

# Prefer the Nginx packages over the distro ones
printf 'Package: *\nPin: origin nginx.org\nPin-Priority: 900\n' \
  | sudo tee /etc/apt/preferences.d/99nginx

sudo apt update && sudo apt install -y nginx
nginx -v

Rocky Linux 9.4

# Add the mainline repository
sudo tee /etc/yum.repos.d/nginx.repo > /dev/null <<'EOF'
[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
module_hotfixes=true
gpgcheck=1
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF

sudo dnf install -y nginx
nginx -v

Compile from source (optional – the official mainline packages already include HTTP/3)

# Create a build directory
mkdir -p /usr/local/src/nginx && cd /usr/local/src/nginx

# Download the source tarball
wget https://nginx.org/download/nginx-1.27.3.tar.gz
tar xzf nginx-1.27.3.tar.gz && cd nginx-1.27.3

# Create a dedicated nginx system user
sudo useradd -r -s /sbin/nologin -d /var/cache/nginx nginx

# Configure with the required modules (including HTTP/3)
./configure \
  --prefix=/etc/nginx \
  --sbin-path=/usr/sbin/nginx \
  --modules-path=/usr/lib64/nginx/modules \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/run/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_v2_module \
  --with-http_v3_module \
  --with-http_realip_module \
  --with-http_gzip_static_module \
  --with-http_auth_request_module \
  --with-http_stub_status_module \
  --with-stream \
  --with-stream_ssl_module \
  --with-stream_realip_module \
  --with-threads \
  --with-file-aio \
  --with-pcre-jit \
  --with-http_gunzip_module

# Build and install (adjust -j to the number of CPU cores)
make -j$(nproc)
sudo make install

# Verify the build
nginx -V

System Configuration

Kernel parameter tuning

# Backup existing sysctl configuration
sudo cp /etc/sysctl.conf /etc/sysctl.conf.bak.$(date +%Y%m%d)

# Optimised parameters for a reverse‑proxy workload
sudo tee /etc/sysctl.d/99-nginx-proxy.conf > /dev/null <<'EOF'
# connection queue length
net.core.somaxconn = 65535
# NIC receive queue length
net.core.netdev_max_backlog = 65535
# reuse TIME_WAIT sockets for outbound connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
fs.file-max = 1048576
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 65535
EOF

# Apply the new settings
sudo sysctl --system

File‑descriptor limits

# Global limits for the nginx user and for root (required for many concurrent connections)
sudo tee /etc/security/limits.d/nginx.conf > /dev/null <<'EOF'
nginx soft nofile 1048576
nginx hard nofile 1048576
root soft nofile 1048576
root hard nofile 1048576
EOF

Dependencies (Ubuntu example)

sudo apt update && sudo apt install -y \
  build-essential libpcre2-dev zlib1g-dev libssl-dev \
  libgeoip-dev libgd-dev libxslt1-dev libperl-dev curl wget gnupg2 ca-certificates lsb-release

systemd Service File (required for compiled installs)

sudo tee /etc/systemd/system/nginx.service > /dev/null <<'EOF'
[Unit]
Description=nginx - high performance web server
Documentation=https://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t -q
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nginx

Core Nginx Configuration (nginx.conf)

# /etc/nginx/nginx.conf (Nginx 1.27.x)
user nginx;
worker_processes auto;
worker_rlimit_nofile 65535;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 10240;
    use epoll;
    multi_accept on;
    accept_mutex off;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Log formats (plain and JSON for ELK/Loki)
    log_format main '$remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" rt=$request_time urt=$upstream_response_time uaddr=$upstream_addr us=$upstream_status';
    log_format json_combined escape=json '{"time":"$time_iso8601","remote_addr":"$remote_addr","request_method":"$request_method","request_uri":"$request_uri","status":$status,"body_bytes_sent":$body_bytes_sent,"request_time":$request_time,"upstream_response_time":"$upstream_response_time","upstream_addr":"$upstream_addr","upstream_status":"$upstream_status","http_referer":"$http_referer","http_user_agent":"$http_user_agent"}';
    access_log /var/log/nginx/access.log main;

    # Basic performance tweaks
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    server_tokens off;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 4;
    gzip_min_length 1024;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml application/rss+xml application/atom+xml image/svg+xml font/woff2;

    # Request size limits
    client_max_body_size 50m;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    client_body_buffer_size 128k;

    # Proxy buffers (adjust for large responses)
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_connect_timeout 10s;
    proxy_send_timeout 60s;
    proxy_read_timeout 60s;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support (maps Upgrade header)
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    # Rate limiting (example: 30 requests per second per IP)
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=30r/s;
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    # Include virtual‑host definitions
    include /etc/nginx/conf.d/*.conf;
}

Reverse‑Proxy Server Blocks

API gateway (api.example.com)

# /etc/nginx/conf.d/api.conf
upstream api_backend {
    least_conn;
    server 10.0.1.10:8080 weight=5 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.1.12:8080 weight=2 max_fails=3 fail_timeout=30s;
    keepalive 64;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    listen 80;
    server_name api.example.com;
    location /.well-known/acme-challenge/ { root /var/www/certbot; }
    location / { return 301 https://$host$request_uri; }
}

server {
    listen 443 ssl;
    http2 on;
    server_name api.example.com;
    ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
    include /etc/nginx/snippets/ssl-params.conf;
    access_log /var/log/nginx/api_access.log json_combined;
    error_log /var/log/nginx/api_error.log warn;
    limit_req zone=api_limit burst=50 nodelay;
    limit_conn conn_limit 100;
    location /api/ {
        proxy_pass http://api_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_next_upstream error timeout http_502 http_503 http_504;
        proxy_next_upstream_tries 3;
        proxy_next_upstream_timeout 15s;
    }
    location /health { proxy_pass http://api_backend/health; access_log off; }
    location /swagger/ {
        allow 10.0.0.0/8;
        allow 172.16.0.0/12;
        allow 192.168.0.0/16;
        deny all;
        proxy_pass http://api_backend/swagger/;
    }
    location / {
        default_type application/json;
        return 404 '{"error": "Not Found"}';
    }
}

Static front‑end site (www.example.com)

# /etc/nginx/conf.d/web.conf
server {
    listen 80;
    server_name www.example.com example.com;
    location /.well-known/acme-challenge/ { root /var/www/certbot; }
    location / { return 301 https://www.example.com$request_uri; }
}

server {
    listen 443 ssl;
    http2 on;
    server_name www.example.com example.com;
    if ($host = example.com) { return 301 https://www.example.com$request_uri; }
    ssl_certificate /etc/letsencrypt/live/www.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/www.example.com/privkey.pem;
    include /etc/nginx/snippets/ssl-params.conf;
    access_log /var/log/nginx/web_access.log json_combined;
    error_log /var/log/nginx/web_error.log warn;
    root /var/www/frontend/dist;
    index index.html;
    location / { try_files $uri $uri/ /index.html; }
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff2|woff|ttf|svg|webp)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        access_log off;
    }
    location /api/ {
        proxy_pass http://api_backend/;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

WebSocket service (ws.example.com)

# /etc/nginx/conf.d/websocket.conf
upstream ws_backend {
    ip_hash;
    server 10.0.1.20:3000 max_fails=3 fail_timeout=30s;
    server 10.0.1.21:3000 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name ws.example.com;
    location /.well-known/acme-challenge/ { root /var/www/certbot; }
    location / { return 301 https://$host$request_uri; }
}

server {
    listen 443 ssl;
    http2 on;
    server_name ws.example.com;
    ssl_certificate /etc/letsencrypt/live/ws.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ws.example.com/privkey.pem;
    include /etc/nginx/snippets/ssl-params.conf;
    access_log /var/log/nginx/ws_access.log json_combined;
    error_log /var/log/nginx/ws_error.log warn;
    location /ws {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
    }
    location / { # fallback for HTTP API on the same host
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Connection "";
    }
}

gRPC service (grpc.example.com)

# /etc/nginx/conf.d/grpc.conf
upstream grpc_backend {
    server 10.0.1.30:5000;
    server 10.0.1.31:5000;
    keepalive 32;
}

server {
    listen 443 ssl;
    http2 on;
    server_name grpc.example.com;
    ssl_certificate /etc/letsencrypt/live/grpc.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/grpc.example.com/privkey.pem;
    include /etc/nginx/snippets/ssl-params.conf;
    location / {
        grpc_pass grpc://grpc_backend;
        grpc_connect_timeout 10s;
        grpc_send_timeout 60s;
        grpc_read_timeout 60s;
        grpc_next_upstream error timeout;
        grpc_next_upstream_tries 3;
    }
    location /grpc.health.v1.Health { grpc_pass grpc://grpc_backend; }
}

Performance Tuning

worker_processes : auto (one per CPU core).

worker_connections : increase to 10240 (or higher) to support many concurrent connections.

worker_rlimit_nofile : 65535 to raise the file‑descriptor limit.

keepalive_timeout : 65 s for client connections; upstream keepalive 32‑128 reduces TCP handshakes.

proxy_buffer_size / proxy_buffers : 16k and 4 64k for typical API traffic; increase for large payloads.

Enable sendfile, tcp_nopush and tcp_nodelay for efficient I/O.

Set gzip_comp_level to 4 for a good compression‑speed trade‑off.
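A rough capacity ceiling follows from these numbers. Note that each proxied request holds two connections (one client-side, one upstream-side), so the practical concurrency is roughly half the theoretical maximum. The worker count here is an assumed example:

```shell
workers=4          # worker_processes (auto = one per CPU core; 4 assumed here)
connections=10240  # worker_connections

theoretical=$(( workers * connections ))
practical=$(( theoretical / 2 ))   # each proxied request consumes 2 connections

echo "theoretical max connections: ${theoretical}"        # 40960
echo "approx. concurrent proxied requests: ${practical}"  # 20480
```

Keep `worker_rlimit_nofile` and the system-wide `nofile` limit above this ceiling, since every connection also consumes a file descriptor.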

Security Hardening

Hide the Nginx version with server_tokens off;

Enforce HTTPS with HSTS:

add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

Set security headers:

add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "0" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

Limit HTTP methods to safe verbs:

if ($request_method !~ ^(GET|HEAD|POST|PUT|PATCH|DELETE|OPTIONS)$) { return 405; }

Block access to hidden files and backup artefacts:

location ~ /\. { deny all; access_log off; log_not_found off; }
location ~* \.(bak|config|sql|psd|ini|log|sh|swp|dist)$ { deny all; access_log off; log_not_found off; }
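The server blocks above include /etc/nginx/snippets/ssl-params.conf but the article never shows its contents. A reasonable sketch follows; the protocol, cipher, and resolver choices are common defaults, not values from the original, so adjust them to your own policy:

```nginx
# /etc/nginx/snippets/ssl-params.conf (sketch – tune to your own policy)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```

Keeping these directives in one snippet ensures every HTTPS server block gets identical TLS settings.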

High Availability with Keepalived

Deploy two Nginx nodes with a virtual IP (VIP) managed by Keepalived. The master runs Nginx and a health‑check script; if Nginx fails the master lowers its priority and the VIP moves to the backup.

# Example Keepalived configuration (master) – /etc/keepalived/keepalived.conf
vrrp_script chk_nginx {
    script "/etc/keepalived/chk_nginx.sh"
    interval 2
    weight -30
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass nginx_ha_2026
    }
    virtual_ipaddress {
        10.0.1.100/24 dev eth0
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault  "/etc/keepalived/notify.sh fault"
}

# Health‑check script – /etc/keepalived/chk_nginx.sh (make it executable)
#!/bin/bash
if ! pgrep -x nginx > /dev/null; then
    systemctl start nginx && sleep 2
    pgrep -x nginx > /dev/null || exit 1
fi
status=$(curl -s -o /dev/null -w "%{http_code}" --max-time 2 http://127.0.0.1/)
# curl reports 000 when it cannot connect at all; treat that and 5xx as failure
[[ "$status" -ge 200 && "$status" -lt 500 ]] || exit 1
exit 0
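The backup node uses an almost identical keepalived.conf. Assuming the master configuration above, only the state and priority differ (the priority value 90 is an assumption, chosen so the master's check-failure penalty drops it below the backup):

```conf
# Backup node – /etc/keepalived/keepalived.conf (differences only)
vrrp_instance VI_1 {
    state BACKUP      # master uses MASTER
    priority 90       # lower than the master's 100
    # interface, virtual_router_id, authentication and VIP must match the master
}
```

With VRRP preemption (the default), the VIP returns to the master automatically once its health check passes again.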

Monitoring

stub_status : expose internal metrics on 127.0.0.1:8888/nginx_status for active connections, requests, reading/writing/waiting.
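The article references 127.0.0.1:8888/nginx_status without showing the configuration behind it; a minimal server block matching that port and path would be:

```nginx
# /etc/nginx/conf.d/status.conf – local-only metrics endpoint
server {
    listen 127.0.0.1:8888;
    location /nginx_status {
        stub_status;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}
```

Binding to 127.0.0.1 plus the allow/deny pair keeps the endpoint off the public interface.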

Prometheus exporter : run nginx‑prometheus‑exporter pointed at http://127.0.0.1:8888/nginx_status; it exposes metrics on :9113 for Prometheus to scrape.

Grafana dashboard ID 12708 ("Nginx by nginxinc") provides ready‑made panels for active connections, request rate, error rates and upstream health.

Sample alert rules (Prometheus):

groups:
  - name: nginx
    rules:
      # Nginx process down
      - alert: NginxDown
        expr: nginx_up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Nginx process not running on {{ $labels.instance }}"

      # Too many active connections
      - alert: NginxHighConnections
        expr: nginx_connections_active > 5000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High active connections on {{ $labels.instance }}"
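For completeness, a sketch of the matching Prometheus scrape job; the job name and target address are assumptions based on the exporter port mentioned above:

```yaml
# prometheus.yml fragment – scrape the nginx-prometheus-exporter
scrape_configs:
  - job_name: nginx
    static_configs:
      - targets: ["127.0.0.1:9113"]
```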

Troubleshooting

502 Bad Gateway : check /var/log/nginx/error.log for upstream connection errors, verify backend service is listening and reachable.

504 Gateway Timeout : increase proxy_connect_timeout, proxy_read_timeout or optimise backend latency.

413 Request Entity Too Large : raise client_max_body_size in the appropriate context (http/server/location).

WebSocket disconnects : default proxy_read_timeout is 60 s; set a larger value (e.g. 3600s) for long‑lived connections.

No live upstreams : all backends are marked as failed; verify the passive health‑check parameters (max_fails, fail_timeout) and backend availability.
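When triaging 502/504 spikes, counting status codes straight from the access log is often the fastest first step. With the "main" log format defined earlier, $status is the ninth whitespace-separated field; the sample log lines below are fabricated for illustration:

```shell
# Fabricated sample lines in the "main" log format defined earlier
cat > /tmp/sample_access.log <<'EOF'
10.0.0.5 - - [01/Jan/2026:12:00:00 +0000] "GET /api/a HTTP/1.1" 502 0 "-" "curl" rt=0.001 urt=- uaddr=10.0.1.10:8080 us=502
10.0.0.6 - - [01/Jan/2026:12:00:01 +0000] "GET /api/b HTTP/1.1" 502 0 "-" "curl" rt=0.002 urt=- uaddr=10.0.1.11:8080 us=502
10.0.0.7 - - [01/Jan/2026:12:00:02 +0000] "GET /api/c HTTP/1.1" 200 42 "-" "curl" rt=0.010 urt=0.008 uaddr=10.0.1.10:8080 us=200
EOF

# Count responses per status code, most frequent first
awk '{print $9}' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

The uaddr field in the same lines then tells you whether failures cluster on one upstream or hit all of them.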

Best Practices

Use worker_processes auto and worker_connections 10240 as a baseline; adjust based on expected concurrency.

Enable keepalive in upstream blocks (32‑128 connections) to reuse TCP connections to backends.

Prefer least_conn load‑balancing for services with variable request latency.

Separate static assets from API traffic; serve static files directly from Nginx.

Terminate TLS at the edge; keep backend services HTTP‑only to simplify certificate management.

Apply strict security headers and disable server_tokens to reduce information leakage.

Implement rate limiting per IP and per URI to protect against abuse.

Deploy Keepalived or a cloud load balancer for HA; ensure health‑check scripts accurately reflect service health.

Collect metrics via stub_status and Prometheus; set alerts for process health, connection spikes and request latency.

Log in JSON format for easy ingestion into ELK/Loki; rotate logs with logrotate to avoid disk exhaustion.
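Since the last point above mentions logrotate: the nginx.org packages ship a rotation policy, but compiled-from-source installs need one. A typical /etc/logrotate.d/nginx looks like this (retention values are a suggestion, not from the original):

```conf
# /etc/logrotate.d/nginx – rotate daily, keep two weeks
/var/log/nginx/*.log {
    daily
    rotate 14
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```

The USR1 signal makes the Nginx master reopen its log files, so rotation happens without dropping connections.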
