Mastering Nginx Rate Limiting: From Basics to Advanced Configurations

This article explains how Nginx implements rate limiting using the leaky-bucket algorithm and walks through basic and advanced configurations, including zones, burst, nodelay, whitelists, multiple limits, logging, custom status codes, and request denial, with complete configuration examples.


Rate limiting is a practical Nginx feature that is often misunderstood and misconfigured. It restricts the number of HTTP requests a client can make within a given time window, whether the request is a simple GET for a homepage or a POST to a login form.

Nginx Rate‑Limiting Mechanism

Nginx uses the leaky‑bucket algorithm, widely employed in networking to handle burst traffic when bandwidth is limited. Imagine a bucket that fills faster than it leaks; excess water overflows, just as excess requests are dropped.
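To make the analogy concrete, here is a minimal Python sketch of a leaky bucket (illustrative names only, not Nginx's actual implementation): the bucket is a counter that drains at the configured rate, and a request that would overflow is rejected.

```python
# Hypothetical sketch, not Nginx source: model the bucket as a counter that
# drains at the configured rate and overflows past `capacity`.
class LeakyBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec   # leak rate: requests drained per second
        self.capacity = capacity   # bucket size: how much may queue up
        self.level = 0.0           # current fill level
        self.last = 0.0            # timestamp of the previous request

    def allow(self, now):
        # Drain for the time elapsed since the last request, then try to add one.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1        # the request fits: accept it
            return True
        return False               # overflow: Nginx would answer 503

bucket = LeakyBucket(rate_per_sec=10, capacity=1)
print(bucket.allow(0.0), bucket.allow(0.0))  # True False (second request overflows)
print(bucket.allow(0.1))                     # True (100 ms drained one slot)
```

At 10 r/s the bucket drains one slot every 100 ms, which is exactly the pacing Nginx applies in the configurations below.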

Basic Rate‑Limiting Configuration

The two main directives are limit_req_zone and limit_req:

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=mylimit;
        proxy_pass http://my_upstream;
    }
}
limit_req_zone defines the shared-memory zone and its parameters (key, zone, and rate); limit_req activates the limit in a specific context, such as a location block.

Key: the variable used to identify a client (e.g., $binary_remote_addr).

Zone: the shared-memory area storing request counters; its size determines how many client states can be kept.

Rate: the maximum request rate (e.g., 10 requests per second, which Nginx tracks as one request per 100 ms).

If Nginx runs out of space for new entries, it removes old ones; if still insufficient, it returns a 503 status.
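As a back-of-the-envelope sizing check (assuming the roughly 64-byte per-state figure the Nginx documentation cites for $binary_remote_addr keys on 64-bit platforms), the 10 MB zone above can track on the order of 160,000 distinct client addresses:

```python
# Rough zone-capacity arithmetic, assuming ~64 bytes per tracked state as the
# Nginx docs cite for $binary_remote_addr keys on 64-bit platforms.
STATE_BYTES = 64
zone_bytes = 10 * 1024 * 1024            # zone=mylimit:10m
approx_states = zone_bytes // STATE_BYTES
print(approx_states)                     # 163840 -> roughly 160k client IPs
```

If traffic comes from more unique clients than the zone can hold, Nginx starts evicting old entries as described above.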

Handling Bursts

When multiple requests arrive within the 100 ms window, the burst parameter allows excess requests to be queued instead of immediately rejected:

location /login/ {
    limit_req zone=mylimit burst=20;
    proxy_pass http://my_upstream;
}

The queue can hold up to 20 extra requests, which Nginx forwards at the configured rate (here, one every 100 ms); requests arriving beyond a full queue receive a 503.

Zero‑Delay Queuing

Adding the nodelay flag makes queued requests pass through immediately, while still respecting the configured rate:

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    proxy_pass http://my_upstream;
}

With nodelay, Nginx forwards a request as soon as a slot in the burst queue is available, marking the slot as taken until it expires (e.g., after 100 ms).
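The difference can be sketched in a few lines of Python (illustrative names, not Nginx internals): with 21 simultaneous requests at 10 r/s and burst=20, the plain burst queue paces the queued requests 100 ms apart, while nodelay forwards the whole burst at once and only rejects requests that exceed rate plus burst.

```python
# Sketch of burst scheduling at 10 r/s with burst=20, with and without nodelay.
RATE_INTERVAL_MS = 100   # 10 r/s -> one slot drains every 100 ms
BURST = 20

def schedule(n_requests, nodelay):
    """Forwarding time in ms (or 'rejected') for n simultaneous requests."""
    out = []
    for i in range(n_requests):
        if i == 0:
            out.append(0)                          # within the rate: forwarded at once
        elif i <= BURST:
            # queued: forwarded immediately with nodelay, else paced out
            out.append(0 if nodelay else i * RATE_INTERVAL_MS)
        else:
            out.append("rejected")                 # queue full: Nginx returns 503
    return out

delayed = schedule(21, nodelay=False)
print(delayed[1], delayed[20])          # 100 2000 (queued requests paced 100 ms apart)
print(schedule(21, nodelay=True)[20])   # 0 (nodelay forwards the whole burst at once)
print(schedule(22, nodelay=True)[21])   # rejected (22nd exceeds rate + burst)
```

In both cases the client's sustained rate is capped at 10 r/s; nodelay only removes the queuing latency for the burst itself.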

Advanced Example: Whitelisting

The following configuration exempts IP ranges defined in a whitelist from rate limiting while applying a stricter limit to all other clients:

geo $limit {
    default         1;
    10.0.0.0/8      0;
    192.168.0.0/24  0;
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;

server {
    location / {
        limit_req zone=req_zone burst=10 nodelay;
        # ...
    }
}

Clients in the whitelist receive an empty key, so no limit is applied; all others are limited to 5 requests per second.
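The geo/map chain can be sketched in plain Python (assuming the whitelist covers 10.0.0.0/8 and 192.168.0.0/24): whitelisted clients map to an empty key, and Nginx does not count or limit a request whose limit_req_zone key is empty.

```python
# Python sketch of the geo/map key derivation above (hypothetical helper,
# not an Nginx API): whitelisted clients get an empty limit key.
import ipaddress

WHITELIST = [ipaddress.ip_network("10.0.0.0/8"),
             ipaddress.ip_network("192.168.0.0/24")]

def limit_key(client_ip):
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in WHITELIST):
        return ""            # empty key: the request is not counted or limited
    return client_ip         # stands in for $binary_remote_addr

print(repr(limit_key("10.1.2.3")))     # '' -> exempt from the limit
print(repr(limit_key("203.0.113.7")))  # '203.0.113.7' -> limited at 5 r/s
```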

Multiple limit_req Directives in One Location

When several limit_req directives match a request, the most restrictive limit wins. Example:

http {
    limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;
    limit_req_zone $binary_remote_addr zone=req_zone_wl:10m rate=15r/s;

    server {
        location / {
            limit_req zone=req_zone burst=10 nodelay;
            limit_req zone=req_zone_wl burst=20 nodelay;
        }
    }
}

For whitelisted IPs the first zone's key is empty, so only the second zone applies (15 r/s); all other clients match both zones, and the stricter 5 r/s limit is the one they experience.

Logging

Nginx logs limited requests by default at the error level:

2015/06/13 04:20:00 [error] 120315#0: *32086 limiting requests, excess: 1.000 by zone "mylimit", client: 192.168.1.2, server: nginx.com, request: "GET / HTTP/1.0", host: "nginx.com"

Use limit_req_log_level to change the log level, e.g.:

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_log_level warn;
    proxy_pass http://my_upstream;
}

Custom Error Codes

By default, requests that exceed the limit receive a 503 status. You can change this with limit_req_status (444 is Nginx's non-standard code for closing the connection without sending a response):

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 444;
}

Blocking All Requests to a Location

To deny every request to a specific URL, use the deny all directive:

location /foo.php {
    deny all;
}

Summary

The article covered Nginx’s rate‑limiting capabilities, including basic zone and request directives, burst and nodelay handling, whitelist/blacklist configurations, multiple limits, logging, custom status codes, and outright request denial, providing a comprehensive guide for controlling request flow and protecting upstream services.

Written by Open Source Linux, a publication focused on Linux/Unix content covering fundamentals, system development, network programming, automation/operations, and cloud computing.