
Three Redis-Based Rate Limiting Techniques: setnx, ZSet Sliding Window, and Token Bucket

This article explains three Redis-powered rate‑limiting methods—using SETNX for simple counters, leveraging ZSET for a sliding‑window algorithm, and implementing a token‑bucket scheme with LISTs—providing Java code examples, advantages, and practical considerations for high‑concurrency back‑end services.

IT Architects Alliance

As back-end services face increasingly high-concurrency traffic, rate limiting becomes crucial. Redis offers powerful primitives for this, and the author demonstrates three straightforward implementations that can be applied to protect services.

Method 1: Using Redis SETNX

The SETNX command, often used for distributed locks, can also enforce a fixed-window limit. To allow at most 20 requests per 10 seconds, create a counter key with SETNX, give it a 10-second expiration with EXPIRE, and INCR it on every request; once the counter reaches 20, further requests are rejected until the key expires. Note that SETNX itself does not accept a TTL, so the expiration must be set in a separate step (or both can be combined atomically with SET key value NX EX 10). This approach is simple but only counts within fixed windows: it cannot answer sliding-window questions such as how many requests arrived between seconds 2 and 11.
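The counting logic behind SETNX + EXPIRE + INCR can be sketched without a Redis server. The class below is a hypothetical in-memory stand-in: `windowStart < 0` plays the role of "key does not exist" (so resetting it is the SETNX step), and incrementing `count` is the INCR step.

```java
// In-memory sketch of the fixed-window counter that SETNX + EXPIRE + INCR
// implement in Redis. Hypothetical helper class, not from the article.
class FixedWindowCounter {
    private final int limit;
    private final long windowMillis;
    private long windowStart = -1;   // -1 ~ "key does not exist"
    private int count = 0;

    FixedWindowCounter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request is allowed within the current fixed window.
    synchronized boolean tryAcquire(long nowMillis) {
        if (windowStart < 0 || nowMillis - windowStart >= windowMillis) {
            windowStart = nowMillis;  // key expired: SETNX succeeds, new window
            count = 0;
        }
        count++;                      // INCR
        return count <= limit;
    }
}
```

Note the fixed-window weakness this model shares with the Redis version: a burst at the end of one window plus a burst at the start of the next can exceed the intended rate across the boundary.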

Method 2: Using Redis ZSET (Sliding Window)

The sliding‑window problem is solved by storing each request as a unique member in a sorted set (ZSET), using the current timestamp as the score. By querying the range of scores within the desired interval, the system can determine how many requests occurred in the last N seconds. The following Java code shows how to add entries and check the count:

public Response limitFlow() {
    long currentTime = System.currentTimeMillis();
    // intervalTime is the rate-limit window in milliseconds (e.g. 60_000 for one minute)
    if (Boolean.TRUE.equals(redisTemplate.hasKey("limit"))) {
        // Drop entries that have slid out of the window so the ZSET does not grow forever
        redisTemplate.opsForZSet().removeRangeByScore("limit", 0, currentTime - intervalTime);
        Long count = redisTemplate.opsForZSet()
            .count("limit", currentTime - intervalTime, currentTime);
        if (count != null && count >= 5) {
            return Response.ok("At most 5 requests per minute");
        }
    }
    // Record this request: a random member with the current timestamp as its score
    redisTemplate.opsForZSet().add("limit", UUID.randomUUID().toString(), currentTime);
    return Response.ok("Request allowed");
}

This implementation yields a true sliding window and guarantees at most M requests in any N-second span. Since the ZSET gains a member on every request, stale members should be trimmed periodically (for example with ZREMRANGEBYSCORE) to keep memory bounded.
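The sliding-window mechanics can also be modeled in memory: below, a deque of request timestamps stands in for the ZSET's scores, evicting old entries is the ZREMRANGEBYSCORE step, and the size check is ZCOUNT. This is a hypothetical sketch; the limit of 5 per 60 seconds mirrors the article's example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// In-memory sketch of the ZSET sliding window. Hypothetical helper class;
// timestamps are stored in arrival order, so the oldest is at the front.
class SlidingWindowLimiter {
    private final int limit;
    private final long windowMillis;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    SlidingWindowLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    synchronized boolean tryAcquire(long nowMillis) {
        // ZREMRANGEBYSCORE: evict entries older than the window
        while (!timestamps.isEmpty() && timestamps.peekFirst() <= nowMillis - windowMillis) {
            timestamps.pollFirst();
        }
        if (timestamps.size() >= limit) {
            return false;             // ZCOUNT says the window is full
        }
        timestamps.addLast(nowMillis); // ZADD the new request
        return true;
    }
}
```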

Method 3: Using Redis List for Token Bucket

The token‑bucket algorithm controls the flow by maintaining a list of tokens. Each request attempts to pop a token from the left; if none are available, the request is rejected. Tokens are replenished periodically using a scheduled task that pushes unique UUIDs into the list:

// Consume a token
public Response limitFlow2(Long id) {
    Object token = redisTemplate.opsForList().leftPop("limit_list");
    if (token == null) {
        return Response.ok("No tokens left in the bucket");
    }
    return Response.ok("Request allowed; token consumed");
}
// Add a token every 10 seconds
@Scheduled(fixedDelay = 10_000, initialDelay = 0)
public void setIntervalTimeTask() {
    redisTemplate.opsForList().rightPush("limit_list", UUID.randomUUID().toString());
}

By integrating these snippets into AOP or servlet filters, developers can enforce rate limiting on APIs, protecting their applications.

Beyond rate limiting, Redis supports many other use cases such as caching, distributed locks, and advanced data structures like GeoHash, BitMap, HyperLogLog, and Bloom filters (available from Redis 4.0 onward).

Backend · Java · Redis · Rate Limiting · ZSet · Token Bucket · SETNX
Written by

IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
