Understanding Delay Queues and Implementing Them with JDK, RabbitMQ, Redis, and lmstfy
This article explains the concept, use cases, and common implementations of delay queues, including the JDK's DelayQueue, RabbitMQ's delayed-message plugin, and Redis sorted sets (ZSet), followed by a detailed guide to installing, configuring, and using the open-source lmstfy project.
A delay queue is a mechanism that stores messages and delivers them to consumers only after a specified delay has elapsed, unlike a traditional message queue, which pushes messages to consumers immediately.
How Delay Queues Work
Messages are stored with a future delivery timestamp; they are only dispatched to consumers when the delay expires, ensuring tasks are executed at the intended time.
Typical Use Cases
Cancel orders that remain unpaid for over 30 minutes on e‑commerce platforms.
Auto‑confirm a purchase three days after receipt.
Send reminder SMS to users who registered but have not logged in within 30 days.
Notify meeting participants ten minutes before a scheduled meeting.
Remind users 30 minutes before a flash‑sale starts.
These scenarios share the need to perform an action at a specific time relative to an event, which delay queues can solve.
Common Implementation Methods
Programmatic implementation, e.g., Java's built-in DelayQueue.
Message‑queue frameworks, e.g., RabbitMQ with the rabbitmq-delayed-message-exchange plugin.
Redis‑based implementation using sorted sets (ZSet) to store delayed timestamps.
JDK Built‑in DelayQueue
Advantages
Easy to use directly in code.
Simple implementation.
Disadvantages
Does not support persistence.
Not suitable for distributed systems.
MQ Implementation (RabbitMQ)
RabbitMQ does not natively support delay queues, but the rabbitmq-delayed-message-exchange plugin enables them.
Advantages
Supports distributed deployment.
Provides persistence.
Disadvantages
The framework is relatively heavy and requires setup and configuration.
Redis Implementation
Redis implements delayed messages via a sorted set (ZSet) where the score stores the execution timestamp.
Advantages
Flexible; Redis is a standard component in many internet companies.
Supports message persistence, improving reliability.
Provides distributed support, unlike JDK's DelayQueue.
High availability by leveraging Redis's own HA solutions.
Disadvantages
Requires a polling loop to continuously check for due tasks, which consumes some system resources.
lmstfy
lmstfy (let me schedule task for you) is an open‑source delay‑queue project by Meitu, built with Go and backed by Redis. It is lightweight, resource‑efficient, and has been proven in Meitu's high‑traffic production environment.
Features of lmstfy
Basic queue operations: publish, consume, delete.
Message TTL with automatic expiration.
Delayed consumption.
Automatic retry.
Dead‑letter queue.
Namespace isolation and per‑queue Prometheus monitoring with Grafana dashboards.
Publish/consume rate limiting.
How lmstfy Works
Messages are published to lmstfy. If the delay is greater than zero, the message is placed in a Redis ZSet whose score is the absolute time at which the message becomes due; when that time arrives, the message is moved to the ready queue. Consumers always pull from the ready queue. A timer scans the ZSet every second to move due messages. If a message exceeds its maximum retry count, it is moved to a dead-letter queue, from which it can be revived or deleted.
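The flow above can be condensed into a small in-memory Go model. This is an illustrative sketch of the delay-set / ready-queue / dead-letter pipeline, not lmstfy's actual implementation; all type and field names are invented for the example.

```go
package main

import "fmt"

type job struct {
	id    string
	dueAt int64 // ZSet score: absolute due time (Unix seconds)
	tries int   // remaining delivery attempts
}

type broker struct {
	delayed    []job // stands in for the Redis ZSet
	ready      []job // consumers always pull from here
	deadLetter []job // jobs whose retries are exhausted
}

// tick is run once per second: move every due job to the ready queue.
func (b *broker) tick(now int64) {
	var still []job
	for _, j := range b.delayed {
		if j.dueAt <= now {
			b.ready = append(b.ready, j)
		} else {
			still = append(still, j)
		}
	}
	b.delayed = still
}

// consume pops a ready job, or returns nil when none is available.
func (b *broker) consume() *job {
	if len(b.ready) == 0 {
		return nil
	}
	j := b.ready[0]
	b.ready = b.ready[1:]
	return &j
}

// nack records a failed attempt: the job is redelivered until its
// remaining tries hit zero, then it is moved to the dead-letter queue.
func (b *broker) nack(j job) {
	j.tries--
	if j.tries <= 0 {
		b.deadLetter = append(b.deadLetter, j)
	} else {
		b.ready = append(b.ready, j)
	}
}

func main() {
	b := &broker{}
	b.delayed = append(b.delayed, job{id: "j1", dueAt: 10, tries: 1})
	b.tick(9)  // not due yet: stays in the delay set
	b.tick(10) // due: moved to the ready queue
	j := b.consume()
	b.nack(*j) // last attempt failed -> dead-letter queue
	fmt.Println("dead letter:", b.deadLetter[0].id)
}
```

The real system persists all three structures in Redis and enforces the TTR window between consume and ACK, but the state transitions are the ones shown here.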
lmstfy Server Installation & Deployment
Prerequisite
lmstfy depends on Redis. Install Redis, enable AOF persistence, and set the eviction policy to noeviction so messages are not silently evicted when memory is full.
Redis Configuration
# Persistent storage set to AOF
appendonly yes
# Memory eviction policy set to noeviction
maxmemory-policy noeviction
lmstfy Server Configuration
# Basic auth accounts for the admin API
[Accounts]
test_user = "change.me"
[AdminRedis] # Redis used to store admin data
Addr = "localhost:6379"
[Pool]
[Pool.default]
Addr = "localhost:6379"
# Default parameters (examples)
#TTLSecond = 24*60*60 // 1 day
#DelaySecond = 0
#TriesNum = 1
#TTRSecond = 2*60 // 2 minutes
#TimeoutSecond = 0 // non-blocking
Compile Binary
# Download source code
git clone https://github.com/bitleak/lmstfy.git
# Enter project directory
cd lmstfy
# Build the binary (output placed in _build directory)
make
Start lmstfy Server
_build/lmstfy-server -c config/demo-conf.toml
After the server starts, you can interact with it via the client library.
lmstfy Client Usage
Obtain a Token
Tokens are scoped by namespace. If basic‑auth is enabled, include the credentials in the request.
curl --location 'http://127.0.0.1:7778/token/kb-test' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic dGVzdF91c2VyOmNoYW5nZS5tZQ==' \
--data-urlencode 'description=test'
Response example:
{
"token": "01HQJ7EQBDZT88JH79BDE4FAYC"
}
Producer Example (Go)
package main

import (
	"fmt"

	"github.com/bitleak/lmstfy/client"
)

func main() {
	c := client.NewLmstfyClient("127.0.0.1", 7776, "kb-test", "01HQJ7EQBDZT88JH79BDE4FAYC")
	c.ConfigRetry(3, 50) // optional: retry failed requests 3 times with 50 ms backoff
	// Publish(queue, data, ttlSecond, tries, delaySecond): the job lives at
	// most 100 s, may be delivered up to 3 times, and becomes consumable
	// after a 30 s delay.
	jobID, err := c.Publish("test", []byte("test"), 100, 3, 30)
	if err != nil {
		fmt.Println("Failed to send message:", err)
		return
	}
	fmt.Println("Message sent successfully:", jobID)
}
Consumer Example (Go)
package main

import (
	"fmt"

	"github.com/bitleak/lmstfy/client"
)

func main() {
	c := client.NewLmstfyClient("127.0.0.1", 7776, "kb-test", "01HQJ7EQBDZT88JH79BDE4FAYC")
	c.ConfigRetry(3, 50)
	for {
		// Consume(queue, ttrSecond, timeoutSecond): the job must be ACKed
		// within 6 s (TTR) or it will be redelivered; the call blocks for
		// up to 3 s waiting for a job.
		job, err := c.Consume("test", 6, 3)
		if err != nil {
			panic(err)
		}
		if job != nil {
			fmt.Println(string(job.Data))
			// Acknowledge successful processing so the job is not retried.
			if err := c.Ack("test", job.ID); err == nil {
				fmt.Println("Message processed and acknowledged")
			}
		}
	}
}
Other language examples are available in the lmstfy GitHub repository.
Top Architect
Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large‑scale distributed, and high‑availability architectures, plus architecture adjustments using internet technologies. We welcome idea‑driven, sharing‑oriented architects to exchange and learn together.