
Common Nginx Functions: Static Proxy, Load Balancing, Rate Limiting, Caching, and Access Control

This article introduces Nginx’s key capabilities—including static file serving, various load‑balancing strategies, leaky‑bucket rate limiting, browser and proxy caching, and black‑/white‑list access control—explaining how each feature can be configured and applied in high‑concurrency web environments.


Nginx is one of the most popular web and reverse‑proxy servers, renowned for its performance under high concurrency, where it often outperforms Apache. Beyond load balancing, it offers a range of other useful functions.

1. Static Proxy

Nginx excels at serving static files, making it an excellent image and file server; placing all static resources on Nginx enables separation of dynamic and static content for better performance.
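A minimal sketch of such a static‑file server follows; the domain and paths are placeholders, not from the article:

```nginx
server {
    listen 80;
    server_name static.example.com;   # placeholder domain

    # Requests under /images/ are served straight from disk:
    # /images/a.png -> /data/www/images/a.png
    location /images/ {
        root /data/www;
        try_files $uri =404;   # return 404 on a miss instead of proxying
    }
}
```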

2. Load Balancing

Through reverse proxy, Nginx can distribute requests to multiple backend servers, avoiding single‑node failures. Common load‑balancing strategies include:

1. Round Robin

Requests are assigned to backends in order, treating each server equally regardless of its current load.

2. Weighted Round Robin

Servers with higher capacity receive higher weight, handling more requests, while lower‑capacity servers get lower weight, balancing load according to server capabilities.

3. IP Hash (source‑address hash)

The client’s IP address is hashed to select a backend; the same IP consistently maps to the same server as long as the backend list remains unchanged.

4. Random

A backend is chosen at random from the server list.

5. Least Connections

Requests are sent to the server with the fewest active connections, improving overall utilization.
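The strategies above are selected per `upstream` block. A sketch, with placeholder addresses and weights:

```nginx
upstream backend {
    # Default strategy is round robin; uncomment ONE directive to switch:
    # ip_hash;       # source-address hash: same client IP -> same server
    # least_conn;    # least connections
    # random;        # random selection (nginx 1.15.1+)

    server 10.0.0.1:8080 weight=3;   # weighted round robin: ~3x the traffic
    server 10.0.0.2:8080 weight=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```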

3. Rate Limiting

Nginx’s rate‑limiting module implements a leaky‑bucket algorithm, which is very useful in high‑concurrency scenarios.

1. Configuration Parameters

limit_req_zone is defined in the http block. $binary_remote_addr stores the client IP in compact binary form, keeping each per‑IP state small. The zone parameter defines a shared memory area that holds IP state and request counters; 1 MB holds roughly 16,000 IP states, so a 10 MB zone is enough for about 160,000 IPs. rate sets the maximum request rate (e.g., 100r/s for 100 requests per second).
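Putting those parameters together (the zone name is a placeholder):

```nginx
http {
    # Key: client IP in binary form.
    # Zone "iplimit": 10 MB of shared memory (~160,000 IP states).
    # Rate: at most 100 requests per second per IP.
    limit_req_zone $binary_remote_addr zone=iplimit:10m rate=100r/s;
}
```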

2. Setting Rate Limits

burst defines how many excess requests may queue above the configured rate, and nodelay serves those queued requests immediately instead of pacing them out at the configured rate.
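A sketch applying the zone from above to one location (the zone name and path are placeholders):

```nginx
location /api/ {
    # Up to 20 requests may queue above the steady 100r/s rate;
    # "nodelay" serves the queued requests at once instead of spacing them.
    limit_req zone=iplimit burst=20 nodelay;
    proxy_pass http://backend;
}
```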

4. Caching

1. Browser/Static Resource Caching (expires)

Static resources can be cached on the client side using the expires directive.
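A sketch of client‑side caching for common asset types; the path and expiry are illustrative values:

```nginx
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
    root /data/www;
    expires 7d;                       # sets Cache-Control: max-age=604800
    add_header Cache-Control public;  # also cacheable by shared caches
}
```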

2. Proxy Cache

Nginx can cache responses from upstream servers, reducing load on backend services.
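A sketch of a proxy cache; the cache path, zone name, sizes, and validity times are placeholders:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m
                 max_size=1g inactive=60m;

server {
    location / {
        proxy_cache appcache;
        proxy_cache_valid 200 302 10m;   # cache successful responses 10 min
        proxy_cache_valid 404      1m;   # cache 404s briefly
        proxy_pass http://backend;
    }
}
```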

5. Black/White Lists

1. Whitelist for Rate Limiting

Specific IPs can be exempted from rate limiting.
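One common pattern uses geo and map to give whitelisted clients an empty rate‑limit key, which limit_req does not account. A sketch with placeholder addresses and zone name:

```nginx
geo $limit {
    default        1;
    10.0.0.0/8     0;   # internal network: exempt
    192.168.1.100  0;   # a trusted host: exempt
}

# Whitelisted clients get an empty key, so limit_req skips them.
map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=wlzone:10m rate=100r/s;
```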

2. Blacklist

IP addresses can be blocked entirely.
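A sketch of outright blocking with deny/allow; the addresses are placeholders:

```nginx
location / {
    deny 192.168.1.50;    # a single offending host
    deny 203.0.113.0/24;  # an abusive subnet
    allow all;            # everyone else gets through
}
```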

In summary, Nginx provides static separation, load balancing, rate limiting, caching, and access‑control features that are essential for building robust, high‑performance web services.

Tags: backend development, load balancing, caching, Nginx, web server, rate limiting
Written by Architect's Tech Stack

Java backend, microservices, distributed systems, containerized programming, and more.