Designing Duplicate Request Filtering: Challenges, Solutions, and Best Practices
This article examines why duplicate request filtering is a complex backend problem, explores its various causes, discusses client‑side and server‑side strategies such as request IDs, Redis checks, distributed locks, and request signing, and highlights practical pitfalls and security considerations.
Background
Duplicate requests can damage a system and are hard to avoid by design, especially for write operations. For example, a points‑exchange action may be processed twice, deducting a user's points twice and forcing lengthy troubleshooting if logs are insufficient.
Product managers usually do not consider such anomalies when designing a feature, yet when they occur the blame often falls on developers, who then bear the extra coding and maintenance effort.
The impact of repeated business requests can be significant, and their causes include:
Hackers intercepting and replaying requests.
Clients unintentionally sending the same request within a short time.
Middleware (e.g., gateways) replaying the request.
Other unknown situations.
Although a diagram can illustrate the concept clearly, implementing the solution is far more complicated.
Client‑Side Handling
Clients can mitigate duplicate requests by showing a loading indicator or disabling a button after it is clicked, or by tracking recent requests with data structures such as Bloom filters. These measures are only a first line of defense, however, because the client is the least trustworthy part of the architecture.
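To make the Bloom‑filter idea concrete, here is a minimal in‑memory sketch (in Python rather than the article's C#, purely for illustration; the class and parameter names are hypothetical). A Bloom filter can report false positives but never false negatives, so it can cheaply suppress repeat clicks at the cost of occasionally suppressing a genuinely new request:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: may report false positives, never false negatives."""
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a plain int used as a bit array

    def _positions(self, item):
        # Derive several independent bit positions from one item.
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

seen = BloomFilter()
req = "POST /exchange points=100 user=42"
assert not seen.might_contain(req)  # first click: let it through
seen.add(req)
assert seen.might_contain(req)      # repeat click: suppress
```

Because this state lives only in the client, it disappears on refresh or can be bypassed outright, which is exactly why the article treats client‑side measures as temporary.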
Request Identifier
The most common practical solution is to attach a unique request ID to each call. The typical flow is:
The client generates a random request ID and sends it together with business parameters.
The server checks the received ID to determine whether the request is a duplicate.
Server‑side logic often stores the ID in Redis. Example pseudo‑code (C#/.NET Core):

public class Para
{
    public string ReqId { get; set; }
    // other business parameters
}

// Returns true when the request ID has been seen before (i.e., a duplicate).
public bool IsExist(Para p)
{
    // Check Redis for the key (pseudo-call).
    bool isExist = redisMethod(p.ReqId);
    // If absent, treat as a new request and record the ID in Redis.
    if (!isExist)
    {
        AddRedis(p.ReqId);
    }
    return isExist;
}

While many articles stop here, this approach has hidden problems.
Problem 1
In distributed environments, two servers may simultaneously see the request ID as absent because the first server has not yet written the ID back to Redis, leading to the same request being processed twice.
Problem 2
Even with a unique request ID, an attacker can tamper with it and replay the request. Therefore, the ID must be protected by a security mechanism.
Business Signature
To ensure the integrity of the request ID, a common practice is to generate a signature (hash) from the business parameters, often using MD5 for simplicity. Example pseudo‑code:
// Client generates signature
string sign = MD5("param1=value1&param2=value2&time=currentTimestamp");

Including a timestamp helps define what counts as a duplicate request (e.g., clicks within one second versus ten seconds), but the exact rule depends on the specific business scenario.
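A runnable sketch of signing and server‑side verification (Python for illustration; the secret name, the ten‑second window, and the canonicalization rule are all assumptions, not prescriptions from the article):

```python
import hashlib
import time

SECRET = "server-side-secret"  # shared secret known to client and server (hypothetical)
WINDOW_SECONDS = 10            # how long a request remains acceptable (scenario-dependent)

def make_sign(params, timestamp, secret=SECRET):
    # Canonicalize: sort keys so client and server hash identical strings.
    query = "&".join(f"{k}={params[k]}" for k in sorted(params))
    payload = f"{query}&time={timestamp}&key={secret}"
    return hashlib.md5(payload.encode()).hexdigest()

def verify(params, timestamp, sign, now=None):
    now = time.time() if now is None else now
    if abs(now - timestamp) > WINDOW_SECONDS:
        return False  # stale timestamp: likely a replayed request
    # Tampering with any parameter changes the recomputed hash.
    return make_sign(params, timestamp) == sign

params = {"param1": "value1", "param2": "value2"}
ts = 1_700_000_000
sign = make_sign(params, ts)
assert verify(params, ts, sign, now=ts + 1)                                  # fresh, intact
assert not verify({"param1": "HACKED", "param2": "value2"}, ts, sign, now=ts + 1)
assert not verify(params, ts, sign, now=ts + 60)                             # outside the window
```

MD5 is used here only to mirror the article's example; for a signature an attacker must not forge, an HMAC with a stronger hash (e.g., HMAC‑SHA256) is the safer choice.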
Conclusion
Filtering duplicate requests is not as simple as adding a request ID; it involves security, distributed coordination, and sometimes performance and high‑availability concerns (e.g., in flash‑sale systems). Claims that a single request ID suffices are misleading—architecture must be driven by concrete business requirements.
In short: specific business logic dictates the specific implementation; architecture divorced from the business is ineffective.