Master Nginx: Essential Configurations for Backend Engineers
This guide walks backend engineers through essential Nginx configurations—including reverse proxy, load balancing, static file handling, rate limiting, and HTTPS—providing code examples and deployment steps to ensure stable, secure, and high‑performance services while keeping servers resilient during traffic spikes.
Today we skip NullPointerException and talk about essential Nginx configuration skills for backend engineers.
Without proper deployment, your APIs are exposed and fragile; Nginx serves as a Swiss-army knife for reverse proxying, load balancing, rate limiting, static file handling, and HTTPS.
1. What is Nginx?
Nginx is a high‑performance reverse proxy server that forwards external requests to backend services, handles thousands of concurrent connections, serves static files, compresses data, and provides basic security.
External requests first pass through Nginx before being proxied to backend services.
Its event-driven architecture can handle tens of thousands of concurrent connections, far surpassing traditional thread-per-connection servers.
It can serve static files, compress data, and act as a security guard.
Example: hide real backend IP behind Nginx to protect services.
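A minimal sketch of that idea (the backend address 127.0.0.1:8080 is a placeholder): clients only ever see Nginx on port 80, while the application server listens on a private address that never appears in DNS or responses.

```nginx
server {
    listen 80;   # the only port exposed to the outside world

    location / {
        # the backend's real address stays hidden behind Nginx
        proxy_pass http://127.0.0.1:8080;
    }
}
```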
2. Scenario 1 – Reverse Proxy & Load Balancing
Scenario: uneven load across multiple backend servers during a promotion.
Goal: Distribute requests evenly to three backend servers, hide real IPs, and automatically remove failed nodes.
# Global configuration (main context)
user nginx;
worker_processes 1;   # set to auto in production: one worker per CPU core
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

# upstream and server blocks live inside the http {} context
upstream backend_servers {
    least_conn;                # send each request to the server with the fewest active connections
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
    keepalive 32;              # pool of idle keep-alive connections to the backends
}
server {
    listen 80;
    server_name www.yourdomain.com;

    location /api/ {
        proxy_pass http://backend_servers/;   # trailing slash strips the /api/ prefix
        proxy_http_version 1.1;               # required for upstream keepalive to work
        proxy_set_header Connection "";       # likewise: clear the Connection header
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30s;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
        # retry the next upstream on connection errors, timeouts, or 500 responses
        # (this directive belongs here, not inside the upstream block)
        proxy_next_upstream error timeout http_500;
    }
}

Key points:
Backend IPs are hidden behind Nginx’s public IP.
Traffic is balanced across the three servers; least_conn keeps any one node from being overloaded during high traffic, and failed requests are retried on the next server.
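The goal above also called for automatically removing failed nodes. Nginx's passive health checks handle this through per-server parameters; a sketch, reusing the IPs from the example (the exact thresholds are illustrative):

```nginx
upstream backend_servers {
    least_conn;
    # after 3 failed attempts, take a server out of rotation for 30s
    server 192.168.1.10:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    # only receives traffic when all primary servers are marked down
    server 192.168.1.12:8080 backup;
    keepalive 32;
}
```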
3. Scenario 2 – Static Resource Handling & Separation
Scenario: front‑end complains about slow image/JS loading.
Goal: Let Nginx serve static files directly to reduce backend load.
server {
    listen 80;
    server_name www.yourdomain.com;

    # Static files: /static/foo.css is served from /data/static/foo.css
    location /static/ {
        root /data/;
        autoindex off;   # don't expose directory listings
        expires 30d;     # let browsers cache for 30 days
        gzip on;
        # note: gzip mainly helps text assets; PNG is already compressed
        gzip_types text/css application/javascript image/png;
    }

    # Hotlink protection for images
    location /images/ {
        root /data/;
        valid_referers none blocked www.yourdomain.com;
        if ($invalid_referer) { return 403; }
    }

    # Dynamic requests are still proxied to the backend
    location /api/ {
        proxy_pass http://backend_servers/;
    }
}

Key points:
Serving static files directly from Nginx skips the round trip through the application server and is typically an order of magnitude faster than backend processing.
Browser caching and compression make subsequent loads near‑instant.
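For fingerprinted assets (e.g. app.3f9a.js, where the build tool hashes the filename) the cache window can be pushed much further than 30 days; a sketch, assuming such a build pipeline exists:

```nginx
location /static/ {
    root /data/;
    # hashed filenames never change content, so cache them "forever"
    expires max;
    add_header Cache-Control "public, immutable";
}
```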
4. Scenario 3 – Rate Limiting & IP Black‑White List
Scenario: malicious IPs flood the login API.
Goal: Limit concurrent connections and request rate per IP, and block unwanted IPs.
# Rate-limit zones (inside the http block)
http {
    # track concurrent connections per client IP (a 10 MB zone holds ~160k IPs)
    limit_conn_zone $binary_remote_addr zone=ip_conn:10m;
    # allow 5 requests per second per client IP
    limit_req_zone $binary_remote_addr zone=ip_req:10m rate=5r/s;
}

server {
    listen 80;
    server_name www.yourdomain.com;

    location /api/login {
        limit_conn ip_conn 10;                   # at most 10 concurrent connections per IP
        limit_req zone=ip_req burst=10 nodelay;  # absorb short bursts, reject the rest
        # black/white list, evaluated top to bottom, first match wins
        deny 10.0.0.1;          # known bad IP
        allow 192.168.1.0/24;   # trusted subnet
        deny all;
        proxy_pass http://backend_servers/;
    }
}

Key points:
Denied IPs get a 403 response; requests over the rate limit are rejected with 503 by default (set limit_req_status 429 if you prefer), keeping logs clean.
The login endpoint is protected from brute-force and CC (HTTP-flood) attacks.
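When the lists grow beyond a handful of entries, the geo module keeps them maintainable and avoids per-location if checks; a sketch (the variable name $bad_ip and the blacklisted subnet are illustrative):

```nginx
# inside http {}: map each client IP to a flag, evaluated once per request
geo $bad_ip {
    default        0;
    10.0.0.1       1;   # known attacker
    203.0.113.0/24 1;   # example blacklisted subnet
}

server {
    location /api/login {
        # "0" counts as false in an nginx if condition
        if ($bad_ip) { return 403; }
        proxy_pass http://backend_servers/;
    }
}
```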
5. Scenario 4 – HTTPS Configuration
Scenario: users see “not secure” warnings.
Goal: Enable HTTPS so data is encrypted in transit and browsers show the padlock instead of a warning.
# Redirect plain HTTP to HTTPS in a separate server block
# (a rewrite inside the 443 block would cause a redirect loop)
server {
    listen 80;
    server_name www.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name www.yourdomain.com;

    ssl_certificate     /etc/nginx/ssl/yourdomain.crt;
    ssl_certificate_key /etc/nginx/ssl/yourdomain.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://backend_servers/;
    }
}

Key points:
The padlock in the address bar satisfies product managers.
Encrypted transmission protects user passwords.
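A few hardening directives are commonly layered on top; a sketch of additions that go inside the listen 443 server block above (enable HSTS only once you are sure the whole site serves HTTPS, since browsers will remember it):

```nginx
# additions inside the "listen 443 ssl" server block:
ssl_session_cache shared:SSL:10m;   # reuse TLS sessions across worker processes
ssl_session_timeout 1h;
# HSTS: tell browsers to use HTTPS for the next six months
add_header Strict-Transport-Security "max-age=15768000" always;
```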
6. How to Get Nginx Running
1. Install Nginx :
Linux: yum install nginx or apt-get install nginx
Windows: download, unzip, run nginx.exe
2. Start / Restart :
sudo systemctl start nginx     # start
sudo systemctl reload nginx    # apply config changes without dropping connections
sudo systemctl restart nginx   # full restart
3. Test Configuration :
nginx -t   # check for errors before launching or reloading
Remember that Nginx configuration is iterative; adjust limits and blacklists based on traffic and feedback.