How to Stop Brute‑Force Logins with Nginx Rate Limiting
Learn how to protect your web application from brute‑force login attacks by configuring Nginx rate limiting, with step‑by‑step instructions, example configurations, testing methods, custom error pages, and best‑practice tips such as IP whitelisting, HTTPS enforcement, and complementary security measures.
Why Use Rate Limiting?
Prevent brute‑force login attempts by limiting repeated password tries from a single IP.
Protect server resources and avoid overload from malicious traffic.
Ensure fair resource usage so no single user or script can monopolize the server.
Enhance overall security as part of a multi‑layer defense with WAF, firewalls, and monitoring.
Nginx Rate‑Limiting Modules
limit_req_zone + limit_req: limits the request rate per second/minute (commonly used for login endpoints).
limit_conn_zone + limit_conn: limits concurrent connections (useful for downloads or streaming).
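Since the rest of this article focuses on limit_req, here is a minimal limit_conn counterpart for comparison (the /downloads/ location and backend upstream are illustrative assumptions, not part of the setup below):

```nginx
http {
    # Track concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /downloads/ {
            # Each IP may hold at most 2 simultaneous connections here
            limit_conn addr 2;
            proxy_pass http://backend;
        }
    }
}
```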
Step 1: Define a Rate‑Limiting Zone
Place the definition inside the http {} block to record request statistics per client IP.
```nginx
http {
    # Define a zone named "one"
    # 10 MB of memory can track roughly 160,000 IPs
    limit_req_zone $binary_remote_addr zone=one:10m rate=5r/m;

    server {
        listen 80;
        server_name example.com;

        location /login {
            # Apply the zone "one"
            limit_req zone=one burst=10 nodelay;
            proxy_pass http://backend;
        }
    }
}
```

Configuration Explanation
$binary_remote_addr: uses the binary representation of the client IP for compact, efficient zone storage.
rate=5r/m: each IP may make up to 5 requests per minute.
burst=10: allows a short burst of up to 10 excess requests above the configured rate.
nodelay: requests within the burst are served immediately instead of being queued; anything beyond the burst is rejected outright.
This setting is strict enough for login endpoints and dramatically reduces successful brute‑force attempts.
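The interplay of rate, burst, and nodelay can be illustrated with a small simulation. This is a simplified model of Nginx's leaky-bucket accounting, not its actual source code:

```python
def allow(arrivals, rate_per_min, burst):
    """Simplified model of limit_req with nodelay: each request adds 1
    to an 'excess' counter that drains at the configured rate; a request
    that would push excess beyond the burst allowance is rejected.
    arrivals: request timestamps in seconds. Returns accept/reject flags.
    (Illustrative sketch only, not Nginx's implementation.)"""
    excess, last, out = 0.0, None, []
    for t in arrivals:
        if last is not None:
            excess = max(0.0, excess - (t - last) * rate_per_min / 60.0)
        if excess + 1 > burst + 1:   # beyond rate + burst allowance
            out.append(False)        # rejected immediately (nodelay)
        else:
            excess += 1
            out.append(True)
        last = t
    return out

# 12 simultaneous requests against rate=5r/m, burst=10:
# the first 11 (1 "on-rate" + 10 burst) pass, the 12th is rejected
print(allow([0.0] * 12, rate_per_min=5, burst=10))
```

Spacing requests out lets the counter drain: with rate=5r/m, one token is recovered every 12 seconds.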
Step 2: Test the Limiting
Run a simple loop to issue multiple requests and observe the response.
```shell
for i in {1..20}; do curl -I http://example.com/login; done
```

When the limit is exceeded, Nginx returns:

```
HTTP/1.1 503 Service Temporarily Unavailable
```

This confirms that rate limiting is active.
Step 3: Custom Error Page
Instead of the default 503 page, you can return a friendlier message.
```nginx
server {
    listen 80;
    server_name example.com;

    error_page 503 @custom_limit;

    location @custom_limit {
        return 429 "Too many requests. Please try again later.";
    }

    location /login {
        limit_req zone=one burst=10 nodelay;
        proxy_pass http://backend;
    }
}
```

Users hitting the limit will now see the custom 429 response.
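Alternatively, Nginx 1.3.15 and later can set the rejection status directly with the limit_req_status directive, avoiding the error_page indirection (though you lose the custom message body):

```nginx
location /login {
    limit_req zone=one burst=10 nodelay;
    limit_req_status 429;   # return 429 instead of the default 503
    proxy_pass http://backend;
}
```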
Step 4: Additional Optimizations & Tips
Login Interface: 5 requests/minute, burst=5
API Interface: 30 requests/second (adjust to avoid false positives)
Static Resources: typically no rate limiting is needed for CSS, JS, images, etc.
IP Whitelisting

```nginx
location /login {
    allow 192.168.1.100;   # internal admin IP
    deny all;              # note: this blocks all other IPs entirely,
                           # rather than merely rate-limiting them
    limit_req zone=one burst=10 nodelay;
    proxy_pass http://backend;
}
```

Special Reminder
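If the goal is to exempt trusted IPs from rate limiting while still serving everyone else, the standard pattern combines geo and map to give whitelisted clients an empty zone key, which limit_req never counts (the IP shown is a hypothetical example):

```nginx
# Clients mapped to an empty key are not tracked by limit_req_zone
geo $limit {
    default        1;
    192.168.1.100  0;   # internal admin IP (example)
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=one:10m rate=5r/m;
```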
Enforce HTTPS to prevent credential interception.
Combine with strong password policies and login‑failure limits.
Use CAPTCHA or MFA for additional verification.
Deploy tools like Fail2ban to auto‑block offending IPs based on logs.
Regularly monitor Nginx logs ( /var/log/nginx/access.log, /var/log/nginx/error.log).
Security is an ongoing process, not a one‑time setup.
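The log-monitoring tip above can start with a simple one-liner that surfaces the noisiest client IPs. In the combined log format the client IP is field 1; the sample lines below stand in for the real /var/log/nginx/access.log so the snippet is self-contained:

```shell
# Count requests per client IP and list the top offenders
printf '%s\n' \
  '10.0.0.1 - - [01/Jan/2025:00:00:01 +0000] "POST /login HTTP/1.1" 503 0' \
  '10.0.0.1 - - [01/Jan/2025:00:00:02 +0000] "POST /login HTTP/1.1" 503 0' \
  '10.0.0.2 - - [01/Jan/2025:00:00:03 +0000] "GET / HTTP/1.1" 200 612' \
  > /tmp/access.log.sample
awk '{print $1}' /tmp/access.log.sample | sort | uniq -c | sort -rn | head
```

Swap in the real log path to run this against live traffic.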
Conclusion
Limit request frequency on login endpoints.
Block brute‑force attacks effectively.
Reduce the impact of malicious traffic on server stability.
Achieve a more secure and stable system with just a few configuration lines.
Full-Stack DevOps & Kubernetes
Focused on sharing DevOps, Kubernetes, Linux, Docker, Istio, microservices, Spring Cloud, Python, Go, databases, Nginx, Tomcat, cloud computing, and related technologies.