Designing a 100k RPS Like/Read Counter with Redis and MySQL
This article explains how to build a high‑throughput like and read counter that can handle 100,000 requests per second by layering a Redis cache over durable storage (MySQL or Redis AOF), filtering invalid requests before they reach the server, and separating business, statistics, and reporting logic.
Introduction
The previous post covered a flash‑sale system that needed to process 100,000 requests per second. This follow‑up focuses on a simpler scenario: counting likes and reads. Those counts are cumulative and have no hard upper bound, which allows a more straightforward architecture.
Overall Architecture
First layer – Redis cache: Store the current like/read counts in Redis for real‑time read/write. Redis's high‑throughput capabilities handle the massive traffic.
Second layer – Persistence: Use MySQL for durable storage, updating it asynchronously to achieve eventual consistency, or alternatively enable AOF persistence on the Redis cluster.
Third layer – Responsibility separation: Isolate business write/read logic from statistics aggregation and backend reporting.
Cleaning Invalid Requests at the Front End
To reduce unnecessary load on the server, filter out invalid requests on the client side:
Unauthenticated request interception
Duplicate submission prevention (disable the button until a response arrives or a 5 s timeout expires)
Rate‑limit enforcement (e.g., max 100 submissions per minute per user; see the sketch after this list)
CAPTCHA verification to block bots
Eligibility checks (user level, registration age, blacklist)
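Most of these checks live in the client, but the rate limit is worth enforcing server‑side as well, since client code can be bypassed. Below is a minimal sketch using redis-py and a fixed one‑minute window; the rate:{user_id} key scheme and the helper itself are illustrative assumptions, not part of the original design, though the 100‑per‑minute limit mirrors the example above.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

LIMIT = 100   # max submissions per user per window
WINDOW = 60   # window length in seconds

def allow_request(user_id: str) -> bool:
    """Fixed-window rate limit: allow at most LIMIT requests per WINDOW."""
    key = f"rate:{user_id}"        # hypothetical key naming scheme
    count = r.incr(key)            # atomic; creates the key at 1 if absent
    if count == 1:
        r.expire(key, WINDOW)      # start the window on the first request
    return count <= LIMIT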
Redis Operations for High‑Volume Counters
Redis provides atomic increment/decrement commands that are ideal for counters.
Like Counter
# Get existing like count
GET thumbs_up
# Increment by one
INCRBY thumbs_up 1
# Decrement (cancel like)
DECRBY thumbs_up 1
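In application code these commands map directly onto client calls. A minimal sketch with redis-py, assuming the single global thumbs_up key used in the commands above:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def like() -> int:
    """Register a like and return the new total."""
    return r.incrby("thumbs_up", 1)

def unlike() -> int:
    """Cancel a like and return the new total."""
    return r.decrby("thumbs_up", 1)

def like_count() -> int:
    """Read the current total without modifying it."""
    return int(r.get("thumbs_up") or 0)

The read counter below works the same way, minus the decrement.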
Read Counter
# Get existing read count
GET reads
# Increment by one
INCRBY reads 1
Persistence Strategy
The second layer uses MySQL for durable storage. Updates are performed asynchronously: the Redis cache is updated immediately, and a background job periodically writes the accumulated counts to MySQL, preserving eventual consistency while keeping latency low for read/write operations.
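A minimal sketch of such a background job, using redis-py and pymysql: on each run it copies the current Redis totals into MySQL. The counter_stats table, its like_count/read_count columns, and the connection details are assumptions for illustration.

import redis
import pymysql

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def sync_counters() -> None:
    """Copy current Redis totals into MySQL for durability and reporting."""
    likes = int(r.get("thumbs_up") or 0)
    reads = int(r.get("reads") or 0)
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="stats")
    try:
        with conn.cursor() as cur:
            # counter_stats is a hypothetical one-row table of durable totals.
            cur.execute(
                "UPDATE counter_stats SET like_count = %s, read_count = %s",
                (likes, reads),
            )
        conn.commit()
    finally:
        conn.close()

# Invoke from a scheduler, e.g. every few seconds:
sync_counters()

Because Redis remains the source of truth for the live counts, the job can safely overwrite the MySQL row; a missed run only delays durability, it never corrupts the total.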
Separation of Business and Statistics Logic
Business write logic: handles user like/read actions and updates Redis.
Business read logic: serves read requests directly from Redis for optimal performance (see the sketch after this list).
Statistics logic: periodically extracts counts from Redis and syncs them to MySQL.
Backend reporting logic: reads aggregated data from MySQL for dashboards and reports.
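The original does not spell out what happens when the Redis key is missing, but a common implementation of the business read logic (an assumption here) is to serve from Redis and, on a cache miss, reseed the counter from the last durable MySQL total:

import redis
import pymysql

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_read_count() -> int:
    """Business read path: serve from Redis; backfill on a cache miss."""
    cached = r.get("reads")
    if cached is not None:
        return int(cached)
    # Cache miss (e.g., after a Redis restart without AOF): reload the
    # last durable total from MySQL and seed the Redis counter with it.
    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="stats")
    try:
        with conn.cursor() as cur:
            # counter_stats is the hypothetical table from the sync sketch.
            cur.execute("SELECT read_count FROM counter_stats")
            (total,) = cur.fetchone()
    finally:
        conn.close()
    r.set("reads", total)
    return int(total)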
Key Takeaways
Use caching (Redis) to absorb ultra‑high concurrent read/write traffic.
Adopt a micro‑service‑style separation where counting, business processing, and reporting are decoupled.
Persist data asynchronously to a relational store (MySQL) to achieve eventual consistency without sacrificing latency.
