Building a High‑Concurrency Article Interaction System with Go, Redis, and Kafka

This article walks through the design and implementation of a production‑ready article interaction system—including likes, collections, and reads—using Go, Gin, Redis, MySQL, Kafka, and Vue, covering architecture, data modeling, high‑concurrency handling, idempotency, rate limiting, and reconciliation.

Business Scenario and Goals

The system serves content communities (blogs, technical forums, news platforms) where users can like, collect, and read articles. The UI must display real‑time like and collect counts, plus the current user's like/collect state.

Core Challenges

High concurrency – hot articles may receive tens of thousands of likes instantly.

Consistency – like counts must never become negative or diverge.

Idempotency – retries or MQ replay must not duplicate counts.

Anti‑fraud – prevent scripted mass‑liking.

Scalability – future extensions such as comments or shares.

Overall Architecture Design

Technology stack: Go + Gin (API layer), Redis (fast state & counters), MySQL (authoritative store), Kafka (asynchronous write‑behind persistence), Vue 3 (frontend).

Design Principles

Like = high‑concurrency write → Redis + async persistence

Collect = strong consistency → direct DB write

Read = massive event volume → async / batch processing

Core Data Model Design

1️⃣ Article Table (article)

CREATE TABLE article (
  id BIGINT PRIMARY KEY,
  title VARCHAR(255),
  content TEXT,
  view_count BIGINT DEFAULT 0,
  like_count BIGINT DEFAULT 0,
  collect_count BIGINT DEFAULT 0,
  created_at DATETIME
);

Note: the counter columns are denormalized and are not the source of truth; they can be corrected later by reconciliation.

2️⃣ Like Record Table (user_like)

CREATE TABLE user_like (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT NOT NULL,
  article_id BIGINT NOT NULL,
  status TINYINT NOT NULL,
  created_at DATETIME,
  UNIQUE KEY uk_user_article (user_id, article_id)
);

The unique index on (user_id, article_id) makes writes idempotent.

3️⃣ Collect Record Table (user_collect)

CREATE TABLE user_collect (
  id BIGINT PRIMARY KEY AUTO_INCREMENT,
  user_id BIGINT,
  article_id BIGINT,
  created_at DATETIME,
  UNIQUE KEY uk_user_article (user_id, article_id)
);

Like System: High‑Concurrency Core Design

Core Ideas

Redis stores real‑time state and counters.

MySQL stores the final truth.

Kafka decouples write pressure from the API.

Redis Key Design

article:like:{articleId}       → like count
user:like:{userId}:{articleId} → whether the user liked
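To keep every code path building identical keys, the patterns above can be centralized in small helpers (the function names are illustrative, not from the original):

```go
package main

import "fmt"

// likeCountKey returns the counter key holding an article's total likes.
func likeCountKey(articleID int64) string {
	return fmt.Sprintf("article:like:%d", articleID)
}

// userLikeKey returns the per-user like-status key for an article.
func userLikeKey(userID, articleID int64) string {
	return fmt.Sprintf("user:like:%d:%d", userID, articleID)
}
```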

Lua Script for Like (prevent race & negative counts)

-- KEYS[1] = article like count
-- KEYS[2] = user like status
-- ARGV[1] = action (1 like, 0 unlike)
local status = redis.call("GET", KEYS[2])
if ARGV[1] == "1" then
  if status ~= "1" then
    redis.call("SET", KEYS[2], 1)
    return redis.call("INCR", KEYS[1])
  end
else
  if status == "1" then
    redis.call("DEL", KEYS[2])
    local cnt = tonumber(redis.call("GET", KEYS[1]) or "0")
    if cnt > 0 then
      return redis.call("DECR", KEYS[1])
    end
  end
end
return redis.call("GET", KEYS[1])
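The script's transition rules can be checked by mirroring them in pure Go. This is a test-oriented sketch (the function name is illustrative); the real path always goes through Redis so the check-and-update stays atomic:

```go
package main

// applyLike mirrors the Lua script's logic: given the user's current liked
// state and the article's count, it returns the new state and count for
// action 1 (like) or 0 (unlike). A like only counts when the user has not
// liked yet, and an unlike never drives the counter negative.
func applyLike(liked bool, count int64, action int) (bool, int64) {
	if action == 1 {
		if !liked {
			return true, count + 1 // first like by this user: count it
		}
		return liked, count // duplicate like: no change
	}
	if liked {
		if count > 0 {
			return false, count - 1 // undo the like, never below zero
		}
		return false, count
	}
	return liked, count // unlike without a prior like: no change
}
```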

Go Like API Implementation

func LikeArticle(c *gin.Context) {
    userID := c.GetInt64("userID")
    var req struct {
        ArticleID int64 `json:"article_id"`
        Action    int   `json:"action"` // 1 = like, 0 = unlike
    }
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(400, gin.H{"msg": "bad request"})
        return
    }

    likeKey := fmt.Sprintf("article:like:%d", req.ArticleID)
    userKey := fmt.Sprintf("user:like:%d:%d", userID, req.ArticleID)

    _, err := rdb.Eval(ctx, likeLuaScript, []string{likeKey, userKey}, req.Action).Result()
    if err != nil {
        c.JSON(500, gin.H{"msg": "fail"})
        return
    }
    // Asynchronous persistence: the DB write happens via Kafka.
    sendLikeMQ(userID, req.ArticleID, req.Action)
    c.JSON(200, gin.H{"msg": "ok"})
}

Kafka Consumer: Idempotent Persistence

func ConsumeLike(msg LikeMessage) error {
    // Upsert keyed on the (user_id, article_id) unique index: replaying the
    // same message rewrites the same row, so consumption is idempotent.
    return db.Exec(`
        INSERT INTO user_like (user_id, article_id, status)
        VALUES (?, ?, ?)
        ON DUPLICATE KEY UPDATE status = VALUES(status)
    `, msg.UserID, msg.ArticleID, msg.Action).Error // non-nil error → retry
}

MQ replay is safe; failures can be retried.
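Replay safety comes from the upsert being keyed on the unique index. The semantics can be illustrated with an in-memory map standing in for the user_like table (the types here are illustrative):

```go
package main

// likeRowKey identifies one row in user_like via its unique index.
type likeRowKey struct {
	UserID    int64
	ArticleID int64
}

// upsertLike mimics INSERT ... ON DUPLICATE KEY UPDATE: replaying the same
// message any number of times leaves exactly one row with the final status.
func upsertLike(table map[likeRowKey]int, userID, articleID int64, status int) {
	table[likeRowKey{UserID: userID, ArticleID: articleID}] = status
}
```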

Collect System: Strong Consistency Implementation

Collect API

func CollectArticle(c *gin.Context) {
    userID := c.GetInt64("userID")
    var req struct {
        ArticleID int64 `json:"article_id"`
        Action    int   `json:"action"` // 1 = collect, 0 = uncollect
    }
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(400, gin.H{"msg": "bad request"})
        return
    }

    if req.Action == 1 {
        // Insert-if-absent via the unique index (requires "gorm.io/gorm/clause").
        res := db.Clauses(clause.OnConflict{DoNothing: true}).
            Create(&UserCollect{UserID: userID, ArticleID: req.ArticleID})
        // Bump the counter only when a new row was actually inserted.
        if res.Error == nil && res.RowsAffected > 0 {
            db.Model(&Article{}).Where("id = ?", req.ArticleID).
                UpdateColumn("collect_count", gorm.Expr("collect_count + 1"))
        }
    } else {
        res := db.Where("user_id = ? AND article_id = ?", userID, req.ArticleID).
            Delete(&UserCollect{})
        // Decrement only when a row was deleted, so the count never goes negative.
        if res.Error == nil && res.RowsAffected > 0 {
            db.Model(&Article{}).Where("id = ?", req.ArticleID).
                UpdateColumn("collect_count", gorm.Expr("collect_count - 1"))
        }
    }
    c.JSON(200, gin.H{"msg": "ok"})
}

Reading Behavior: Distinguish PV, UV, Real Read

PV – incremented with Redis INCR.

UV – estimated using HyperLogLog.

Real read – counted only after the user stays on the page for ≥ 5 seconds.
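The three metrics differ mainly in counting semantics. In production PV would be a Redis INCR and UV a HyperLogLog (PFADD/PFCOUNT); the sketch below uses an exact in-memory set in place of the HLL to stay self-contained (type and method names are illustrative):

```go
package main

// pageStats illustrates PV vs UV semantics for one article.
type pageStats struct {
	pv    int64
	users map[int64]struct{}
}

func newPageStats() *pageStats {
	return &pageStats{users: make(map[int64]struct{})}
}

// View records one page view by the given user.
func (s *pageStats) View(userID int64) {
	s.pv++                       // PV: every view counts
	s.users[userID] = struct{}{} // UV: distinct users only
}

func (s *pageStats) PV() int64 { return s.pv }
func (s *pageStats) UV() int64 { return int64(len(s.users)) }
```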

Frontend Reporting

setTimeout(() => {
    reportRead(articleId)
}, 5000)

Anti‑Fraud and Rate Limiting

Redis Frequency Control

// SetNX is atomic, avoiding the check-then-set race of Exists + Set:
// only the first request within the 5-second window succeeds.
key := fmt.Sprintf("like:freq:%d:%d", userID, articleID)
ok, err := rdb.SetNX(ctx, key, 1, 5*time.Second).Result()
if err != nil {
    return err
}
if !ok {
    return errors.New("too fast")
}

Gateway Throttling

Combine IP and UserID as a key.

Enforce a per‑minute like limit.

Offline Risk Control

Detect new accounts performing mass likes.

Identify batch actions from the same IP.

Data Reconciliation and Repair (Production Essential)

SELECT article_id, COUNT(*)
FROM user_like
WHERE status = 1
GROUP BY article_id;

A scheduled task periodically corrects Redis counters to guard against MQ loss.
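The job compares the authoritative counts from that query with the Redis counters and writes back corrections. The comparison itself is pure; assume the two maps were loaded from MySQL and Redis respectively (the function name is illustrative):

```go
package main

// diffCounts returns, per article, the authoritative DB count wherever the
// cached counter disagrees; the caller then SETs these values back into Redis.
func diffCounts(dbCounts, redisCounts map[int64]int64) map[int64]int64 {
	fixes := make(map[int64]int64)
	for articleID, want := range dbCounts {
		if redisCounts[articleID] != want {
			fixes[articleID] = want
		}
	}
	// Counters present only in Redis (e.g. the DB rows were purged)
	// are reset to zero.
	for articleID := range redisCounts {
		if _, ok := dbCounts[articleID]; !ok {
			fixes[articleID] = 0
		}
	}
	return fixes
}
```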

Architecture Evolution Recommendations

When the business grows, split interaction functions into independent services while keeping a unified Redis + MQ entry point.

article-service
interaction-service
user-service

Separate services for like, collect, and read.

Redis + MQ remain the unified behavior gateway.

Conclusion

Like, collect, and read are not simple CRUD operations; they form a high‑concurrency behavior system. Key pattern: Redis for real‑time → Kafka for decoupling → MySQL for truth → reconciliation as safety net.
Tags: Backend · Redis · Kafka · High Concurrency · MySQL · Gin
Written by Ray's Galactic Tech

Practice together, never alone. We cover programming languages, development tools, learning methods, and pitfall notes. We simplify complex topics, guiding you from beginner to advanced. Weekly practical content—let's grow together!