Mastering Nginx Rate Limiting: From Basics to Advanced Configurations
This guide explains how Nginx implements rate limiting with the leaky‑bucket algorithm and walks through basic and advanced configurations: limit_req_zone, limit_req, burst, nodelay, whitelisting, multiple limits, logging, and custom status codes, with concrete code examples and practical tips.
Rate limiting in Nginx is a powerful feature that controls how many HTTP requests a client can make within a given time window; it is useful for security, DDoS mitigation, and protecting upstream services.
The mechanism relies on the leaky‑bucket algorithm: incoming requests fill a bucket at the "inlet" rate, while the server drains the bucket at a fixed "outlet" rate; excess requests overflow and are dropped.
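To make the mechanism concrete, here is a minimal Python sketch of a leaky bucket. The class name and parameters are illustrative; this is not how Nginx stores its counters internally:

```python
import time

class LeakyBucket:
    """Toy leaky-bucket limiter: the bucket drains at a fixed rate,
    and a request is accepted only if one more 'drop' still fits."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # outlet rate: drops drained per second
        self.capacity = capacity      # bucket size: max pending drops
        self.level = 0.0              # current fill level
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1           # the request fits: let it through
            return True
        return False                  # bucket overflows: drop the request
```

With `rate_per_sec=10` and `capacity=1` this behaves like a strict 10 r/s limit with no burst allowance; a larger capacity corresponds to the burst parameter discussed later in this article.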
Basic Rate‑Limiting Configuration
The two core directives are limit_req_zone and limit_req:
```nginx
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=mylimit;
        proxy_pass http://my_upstream;
    }
}
```

limit_req_zone defines a shared memory zone (here mylimit) that stores per‑IP state; it takes a key (e.g., $binary_remote_addr), a zone name and size, and a rate (maximum requests per second). The limit_req directive then activates the limit in a specific context such as a location.
If the shared memory zone runs out of space, Nginx discards the oldest entries to make room; if it still cannot allocate a slot for the new state, it returns a 503 status.
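The eviction idea can be sketched as follows (a simplification: the function name and structure are illustrative, and the real module keeps its states in a shared-memory tree with LRU-style expiry, freeing only a couple of the oldest nodes per request):

```python
from collections import OrderedDict

def lookup_state(states, key, max_entries):
    """Return the limiter state for `key`, evicting old entries if needed.
    Returns None when no slot can be freed (Nginx answers 503 in that case)."""
    if key in states:
        states.move_to_end(key)          # mark as most recently used
        return states[key]
    evicted = 0
    while len(states) >= max_entries and evicted < 2:
        states.popitem(last=False)       # expire the oldest entry
        evicted += 1
    if len(states) >= max_entries:
        return None                      # still no room: reject the request
    state = states[key] = {"excess": 0.0}
    return state
```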
Handling Bursts
To allow short spikes, add the burst parameter:
```nginx
location /login/ {
    limit_req zone=mylimit burst=20;
    proxy_pass http://my_upstream;
}
```

With burst=20, up to 20 excess requests are queued; Nginx forwards one queued request every 100 ms (matching the 10 r/s rate) and returns 503 only when the queue already holds 20 requests.
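A short simulation can check the queueing arithmetic described above (timestamps in milliseconds; `simulate_burst` is an illustrative model, not Nginx code):

```python
def simulate_burst(arrivals_ms, rate_per_sec, burst):
    """Classify each arrival as 'forwarded', 'queued', or 'rejected'
    under a limit_req-style limiter with the given burst size."""
    interval_ms = 1000 / rate_per_sec     # one slot every 100 ms at 10 r/s
    next_free = 0.0                       # time the next slot opens
    results = []
    for t in arrivals_ms:
        if t >= next_free:
            results.append("forwarded")   # on schedule: pass immediately
            next_free = t + interval_ms
        elif (next_free - t) / interval_ms <= burst:
            results.append("queued")      # within burst: wait for a slot
            next_free += interval_ms
        else:
            results.append("rejected")    # queue full: 503
    return results
```

Feeding 22 simultaneous requests through at 10 r/s with burst=20 yields one forwarded immediately, 20 queued (released every 100 ms), and the last rejected.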
Zero‑Delay Queuing
Adding nodelay makes queued requests pass through immediately as long as a slot is available, avoiding the 2‑second wait for the 20th request in the previous example:
```nginx
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    proxy_pass http://my_upstream;
}
```

When a request arrives early, Nginx forwards it immediately if the burst queue has a free slot; the slot is then held for the rate‑limited interval (100 ms) before being released.
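The same model can be extended with nodelay semantics (again an illustration, not Nginx internals):

```python
def simulate_nodelay(arrivals_ms, rate_per_sec, burst):
    """With nodelay, a request inside the burst window is forwarded
    at once; its slot is still released only at the configured rate."""
    interval_ms = 1000 / rate_per_sec
    next_free = 0.0
    results = []
    for t in arrivals_ms:
        occupied = max(0.0, (next_free - t) / interval_ms)  # slots in use
        if occupied <= burst:
            results.append("forwarded")    # sent immediately, slot taken
            next_free = max(next_free, t) + interval_ms
        else:
            results.append("rejected")     # no free burst slot: 503
    return results
```

With 22 simultaneous requests at 10 r/s and burst=20, 21 are forwarded immediately and the last is rejected, instead of one immediate and 20 trickled out at 100 ms intervals as in the plain burst case.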
Advanced Example: Whitelisting
The following configuration exempts IPs in a whitelist from rate limiting while applying a stricter limit to all others:
```nginx
geo $limit {
    default         1;
    10.0.0.0/8      0;
    192.168.0.0/24  0;
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;

server {
    location / {
        limit_req zone=req_zone burst=10 nodelay;
        # ...
    }
}
```

When $limit_key is empty (a whitelisted IP), the request bypasses the limit; otherwise the 5 r/s limit applies.
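The geo/map pair is essentially computing a per-client limiter key; the same decision expressed in Python (using the stdlib ipaddress module; the networks mirror the config above, and the function name is illustrative):

```python
import ipaddress

WHITELIST = [ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("192.168.0.0/24")]

def limit_key(client_ip):
    """Mirror the geo+map logic: whitelisted clients get an empty key,
    which disables the limit for them; everyone else is keyed by address."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in WHITELIST):
        return ""            # empty key: limit_req_zone is not applied
    return client_ip         # stands in for $binary_remote_addr
```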
Multiple limit_req Directives in One Location
Multiple limits can be combined; the most restrictive one wins:
```nginx
http {
    limit_req_zone $limit_key          zone=req_zone:10m    rate=5r/s;
    limit_req_zone $binary_remote_addr zone=req_zone_wl:10m rate=15r/s;

    server {
        location / {
            limit_req zone=req_zone    burst=10 nodelay;
            limit_req zone=req_zone_wl burst=20 nodelay;
        }
    }
}
```

Whitelisted IPs have an empty key in req_zone, so only the second zone (15 r/s) applies to them; all other clients match both zones and are effectively limited to the stricter 5 r/s.
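How two simultaneous zones interact can be modeled as an AND over all applicable limiters (a toy per-window counter, not Nginx's rate accounting; all names here are illustrative):

```python
def allowed(counters, zones, key_for_zone):
    """A request passes only if every applicable limiter admits it,
    so the most restrictive matching zone wins."""
    verdicts = []
    for zone, limit in zones.items():
        key = key_for_zone(zone)
        if key == "":
            continue                      # empty key: zone not applied
        bucket = counters.setdefault(zone, {})
        bucket[key] = bucket.get(key, 0) + 1
        verdicts.append(bucket[key] <= limit)
    return all(verdicts)
```

A non-whitelisted client is counted in both zones and is cut off by the stricter one; a whitelisted client has an empty key in the strict zone, so only the generous zone counts.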
Logging and Status Customization
By default Nginx logs rate‑limited events at the error level:

```
2015/06/13 04:20:00 [error] 120315#0: *32086 limiting requests, excess: 1.000 by zone "mylimit", client: 192.168.1.2, server: nginx.com, request: "GET / HTTP/1.0", host: "nginx.com"
```

The entry records the excess value, the zone, the client address, the server, the request line, and the host. The log level can be changed with limit_req_log_level:
```nginx
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_log_level warn;
    proxy_pass http://my_upstream;
}
```

To return a different HTTP status instead of the default 503, use limit_req_status (e.g., 444):
```nginx
location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 444;
}
```

(Status 444 is Nginx's non-standard code that closes the connection without sending a response.) If you need to block all traffic to a specific URL entirely, combine deny all with a location block:
```nginx
location /foo.php {
    deny all;
}
```

Conclusion
The article covered Nginx’s rate‑limiting capabilities, including basic limit_req_zone / limit_req usage, burst and nodelay handling, whitelist/blacklist scenarios, multiple concurrent limits, logging details, and custom response codes, providing a comprehensive reference for securing and stabilizing web services.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Liangxu Linux
Liangxu, a self‑taught IT professional now working as a Linux development engineer at a Fortune 500 multinational, shares extensive Linux knowledge—fundamentals, applications, tools, plus Git, databases, Raspberry Pi, etc. (Reply “Linux” to receive essential resources.)
