Understanding Workflow Engines, Message Queues, and Redis: Core Concepts and Practical Guidance
This article explains the fundamentals of workflow engines, compares process and rule engines, discusses why and how to use message queues (including delay levels and popular MQ products), and details Redis data structures and memory‑eviction policies, providing practical guidance for backend system design.
What Is a Workflow?
According to Wikipedia, a workflow abstracts and describes the business rules governing a series of work steps, modeling how tasks are organized and executed by a computer to achieve a business goal.
In simple terms, a workflow defines the sequence and rules for completing a task, such as leave approval, order processing, or expense reimbursement.
Why Use a Workflow Engine?
Without a workflow engine, each step must be hard‑coded, leading to complex, hard‑to‑maintain code and difficulty adapting to changes.
Code complexity and maintenance difficulty: Every business step is implemented in code, making modifications costly and error‑prone.
Low adaptability: Adding or reordering steps requires code changes across the system.
A workflow engine solves these problems by providing a visual designer where processes are defined as nodes and connections, allowing modifications without touching code and enabling rapid adaptation.
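To make the "process as data" idea concrete, here is a minimal sketch (hypothetical node names, not any specific engine's API) of a leave-approval process declared as nodes and transitions rather than hard-coded branching:

```python
# A minimal workflow sketch: the process is data (nodes + transitions),
# so inserting or reordering steps changes the definition, not the code.
LEAVE_APPROVAL = {
    "start": "submit",
    "transitions": {
        "submit": "manager_review",
        "manager_review": "hr_review",
        "hr_review": "done",
    },
}

def run(workflow):
    """Walk the workflow from its start node to the terminal node."""
    node = workflow["start"]
    visited = [node]
    while node in workflow["transitions"]:
        node = workflow["transitions"][node]
        visited.append(node)
    return visited
```

Adding an extra approval step is then a one-line change to the `transitions` dictionary, which is exactly what a visual designer edits behind the scenes.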
Process Engine vs. Rule Engine
Process Engine
The core of a process engine is to define and execute business processes, focusing on the flow of activities among participants.
Standardized process definition ensures consistency and reduces human error.
Real‑time monitoring tracks each step’s progress and identifies bottlenecks.
Automation reduces manual intervention, improving efficiency.
Lower development and maintenance cost through visual tools that let business users design processes.
Typical use cases include leave approval, reimbursement, and order processing.
Rule Engine
A rule engine extracts decision logic from application code, using declarative rule definitions (e.g., decision tables, trees) to evaluate input data and trigger actions.
Manage frequently changing logic by externalizing rules.
Reduce coupling between business logic and code, making the codebase cleaner.
Support complex decision making through rule composition.
Increase decision efficiency by automating evaluation.
Examples include loan approval, fraud detection, ad‑placement strategies, and discount calculation.
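As a sketch of the decision-table idea (the rules and thresholds below are hypothetical, for illustration only), discount logic can be externalized into an ordered list of condition/action pairs instead of an if/else chain:

```python
# A minimal decision-table sketch: each rule is (predicate, action),
# evaluated in order. The business logic lives in data, so changing a
# threshold does not require touching the evaluation code.
DISCOUNT_RULES = [
    (lambda order: order["total"] >= 500, 0.20),
    (lambda order: order["total"] >= 200, 0.10),
    (lambda order: order["is_new_customer"], 0.05),
]

def discount_for(order, rules=DISCOUNT_RULES):
    """Return the discount of the first matching rule, or 0.0 if none match."""
    for predicate, discount in rules:
        if predicate(order):
            return discount
    return 0.0
```

A production rule engine adds rule priorities, conflict resolution, and hot reloading on top of this same core loop.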
Why Use a Message Queue (MQ)?
Message queues bring three main benefits:
Asynchronous processing improves system performance by reducing response time.
Peak‑shaving / rate limiting smooths traffic spikes.
Reduced system coupling – producers and consumers interact only via the queue.
In a typical e‑commerce scenario, order creation publishes a message; downstream services (payment, inventory, shipping, notification, risk control) consume it independently, achieving decoupling and scalability.
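The decoupling can be sketched with a toy in-process publish/subscribe mechanism (a real system would use Kafka, RocketMQ, or similar): the order service publishes one event and never learns which downstream services consume it.

```python
from collections import defaultdict

# Toy pub/sub sketch of MQ-style decoupling. The producer publishes one
# event; each downstream service is just another subscriber, added
# without modifying the producer.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    for handler in subscribers[topic]:
        handler(message)

log = []
subscribe("order_created", lambda m: log.append(("inventory", m["id"])))
subscribe("order_created", lambda m: log.append(("notify", m["id"])))
publish("order_created", {"id": 42})
```

Adding a risk-control consumer later is one more `subscribe` call; the order-creation path is untouched.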
MQ Delay Levels (RocketMQ Example)
RocketMQ 4.x supports 18 predefined delay levels (e.g., 1 s, 5 s, …, 2 h). The table below lists them:
| Delay Level | Delay Time | Delay Level | Delay Time |
|---|---|---|---|
| 1 | 1s | 10 | 6min |
| 2 | 5s | 11 | 7min |
| 3 | 10s | 12 | 8min |
| 4 | 30s | 13 | 9min |
| 5 | 1min | 14 | 10min |
| 6 | 2min | 15 | 20min |
| 7 | 3min | 16 | 30min |
| 8 | 4min | 17 | 1h |
| 9 | 5min | 18 | 2h |
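The levels above can be expressed as a simple lookup from level number to delay in seconds, which is handy when validating producer-side configuration:

```python
# RocketMQ 4.x delay levels from the table above, as seconds.
# Level 1 = 1s ... level 18 = 2h; arbitrary delays are not supported in 4.x.
DELAY_LEVEL_SECONDS = [
    1, 5, 10, 30,
    60, 2 * 60, 3 * 60, 4 * 60, 5 * 60, 6 * 60, 7 * 60, 8 * 60, 9 * 60,
    10 * 60, 20 * 60, 30 * 60, 60 * 60, 2 * 60 * 60,
]

def delay_for_level(level):
    """Map a RocketMQ 4.x delay level (1-18) to its delay in seconds."""
    if not 1 <= level <= len(DELAY_LEVEL_SECONDS):
        raise ValueError("RocketMQ 4.x supports delay levels 1-18 only")
    return DELAY_LEVEL_SECONDS[level - 1]
```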
RocketMQ 5.0 introduces timer‑based delayed messages using a time‑wheel algorithm, overcoming the fixed‑level limitation.
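A hashed time wheel, the idea behind such timer messages, can be sketched as a ring of buckets with one tick per slot; tasks further than one revolution away carry a rounds counter (this is a conceptual sketch, not RocketMQ's actual implementation):

```python
# Minimal hashed time-wheel sketch. Each tick advances one slot; a task
# scheduled more than one full revolution away is kept in its bucket
# with a decremented "rounds" counter until it is due.
class TimeWheel:
    def __init__(self, slots=8):
        self.slots = [[] for _ in range(slots)]
        self.current = 0

    def schedule(self, delay_ticks, task):
        rounds, offset = divmod(delay_ticks, len(self.slots))
        slot = (self.current + offset) % len(self.slots)
        self.slots[slot].append([rounds, task])

    def tick(self):
        """Advance one tick; return the tasks that are now due."""
        self.current = (self.current + 1) % len(self.slots)
        bucket = self.slots[self.current]
        due = [task for rounds, task in bucket if rounds == 0]
        self.slots[self.current] = [[r - 1, t] for r, t in bucket if r > 0]
        return due
```

Because scheduling and firing are both O(1) per tick, a time wheel supports arbitrary delays without the fixed-level table of RocketMQ 4.x.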
Common Message Queue Products
Kafka
Kafka is a distributed streaming platform (originally from LinkedIn) that provides message queuing, durable storage, and stream processing. Version 2.8 introduced the Raft-based KRaft mode as a replacement for ZooKeeper (production-ready from 3.3), simplifying deployment.
RocketMQ
RocketMQ, an Apache top‑level project from Alibaba, offers cloud‑native messaging, high throughput, stream processing, and strong reliability for financial‑grade scenarios.
RabbitMQ
RabbitMQ implements AMQP, offering reliability, flexible routing, clustering, high availability, multi‑protocol support, and a user‑friendly management UI.
Pulsar
Pulsar is a cloud‑native distributed messaging and streaming platform (originating from Yahoo) with multi‑tenant support, tiered storage, and serverless functions.
ActiveMQ
ActiveMQ is considered obsolete and is not recommended for new projects.
Why Use Redis?
Redis provides ultra‑fast in‑memory access, dramatically increasing read/write speed compared to disk‑based databases, and supports high QPS (tens of thousands per second) for caching, distributed locks, rate limiting, and even message queues.
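As an illustration of the rate-limiting use case, here is a fixed-window counter sketch; a plain dict stands in for Redis, but against a real server the same pattern is an `INCR` plus `EXPIRE` on a per-window key:

```python
import time

# Fixed-window rate limiter sketch. The dict stands in for Redis;
# with a real server: INCR on key "user:{window}", EXPIRE window secs.
store = {}

def allow(user, limit=5, window=60, now=None):
    """Return True if this request fits within the user's window quota."""
    now = time.time() if now is None else now
    key = (user, int(now // window))     # one counter per time window
    store[key] = store.get(key, 0) + 1   # Redis equivalent: INCR key
    return store[key] <= limit
```

Redis makes this counter shared across application instances, which an in-process dict cannot do.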
Redis Data Structures
Redis offers five basic types (String, List, Set, Hash, Zset) and three special types (HyperLogLog, Bitmap, Geospatial). Internally these are implemented using structures such as SDS, LinkedList, Dict, SkipList, Intset, ZipList, and QuickList.
| Type | Underlying structures |
|---|---|
| String | SDS |
| List | LinkedList / ZipList / QuickList |
| Hash | Dict, ZipList |
| Set | Dict, Intset |
| Zset | ZipList, SkipList |
Redis Memory Eviction Policies
When memory reaches the maxmemory limit (configured in redis.conf), Redis applies one of several eviction policies:
volatile-lru: evicts the least-recently-used keys among those with an expiration set.
volatile-ttl: evicts the keys with the shortest remaining time to live.
volatile-random: evicts random keys among those with an expiration set.
allkeys-lru: evicts the least-recently-used keys regardless of expiration.
allkeys-random: evicts random keys.
noeviction: rejects writes when memory is full (default).
volatile-lfu: evicts the least-frequently-used keys among those with an expiration set (Redis 4.0+).
allkeys-lfu: evicts the least-frequently-used keys overall (Redis 4.0+).
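The allkeys-lru idea can be sketched with an ordered map (note that Redis actually uses an approximate LRU based on random sampling, not an exact list like this):

```python
from collections import OrderedDict

# Sketch of exact allkeys-lru eviction: every read or write marks the
# key most-recently-used; on overflow the oldest entry is evicted.
class LRUCache:
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)    # evict least recently used
```

Redis samples a handful of keys per eviction instead of maintaining this global ordering, trading exactness for much lower memory and CPU overhead.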
Configuration commands:
> config get maxmemory
1) "maxmemory"
2) "0"
> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
To change the policy at runtime:
> config set maxmemory-policy allkeys-lru
For permanent changes, edit redis.conf and restart the server.
Further details can be found in the official Redis documentation.