Scale a Monolithic Article Interaction Service with Kubernetes Microservices
This article walks through converting a single‑service article interaction module—handling likes, favorites, and reads—into independent microservices deployed on Kubernetes, detailing architecture goals, service separation, Redis‑based high‑concurrency handling, Kafka async persistence, deployment configurations, auto‑scaling, and real‑world performance results.
Background and Problem
In the original monolithic architecture, the article service handled both content delivery and user interaction (likes, favorites, read statistics). As traffic grew, the write‑heavy interaction path created several problems: severe concurrency pressure, inaccurate counters, no way to scale the interaction workload independently of content reads, and little room for risk control against fake engagement.
Architecture Upgrade Goals
Deploy the interaction subsystem as an independent service.
Isolate high‑concurrency write traffic from article read traffic.
Provide eventual consistency with auditability for interaction data.
Enable independent scaling on Kubernetes.
Microservice Decomposition
The system is split behind an API gateway into two services:
Gateway Service
├─ Article Service → article content (read‑heavy)
└─ Interaction Service → likes / favorites / reads (write‑heavy)
Interaction Service Design
Responsibility Boundary
Like – responsible
Collect (favorite) – responsible
Read – responsible
Article content – not responsible
User system – not responsible
Project Structure (Gin)
interaction-service/
├── cmd/main.go
├── internal/
│   ├── api/        # HTTP interfaces
│   ├── service/    # Business logic
│   ├── cache/      # Redis operations
│   ├── mq/         # Kafka integration
│   └── dao/        # Database access
├── configs/
├── Dockerfile
└── deploy/
High‑Concurrency Like System
Redis Key Design
article:like:{articleId} → total like count
user:like:{userId}:{articleId} → per‑user like status (1 = liked, absent = not liked)
Atomic Lua Script
-- KEYS[1] = article like count key
-- KEYS[2] = user like status key
-- ARGV[1] = "1" for like, "0" for unlike
local status = redis.call("GET", KEYS[2])
if ARGV[1] == "1" then
  if status ~= "1" then
    redis.call("SET", KEYS[2], 1)
    return redis.call("INCR", KEYS[1])
  end
else
  if status == "1" then
    redis.call("DEL", KEYS[2])
    local cnt = tonumber(redis.call("GET", KEYS[1]) or "0")
    if cnt > 0 then
      return redis.call("DECR", KEYS[1])
    end
  end
end
return redis.call("GET", KEYS[1])
Gin Like Handler
func LikeArticle(c *gin.Context) {
	userID := c.GetInt64("userID")
	articleID := c.Param("id")
	likeKey := "article:like:" + articleID
	userKey := fmt.Sprintf("user:like:%d:%s", userID, articleID)
	// ARGV[1] = "1" marks a like; the Lua script makes the
	// check-status-and-increment sequence atomic on the Redis side.
	_, err := rdb.Eval(ctx, likeLua, []string{likeKey, userKey}, "1").Result()
	if err != nil {
		c.JSON(500, gin.H{"msg": "fail"})
		return
	}
	// async persistence via Kafka; the response does not wait for MySQL
	sendLikeMQ(userID, articleID, 1)
	c.JSON(200, gin.H{"msg": "ok"})
}
Kafka Asynchronous Persistence (Idempotent)
The like event is sent to a Kafka topic; the consumer writes the record to MySQL using an upsert to guarantee idempotency.
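The `sendLikeMQ` helper referenced in the handler is not shown in the source. As one possible shape, the event can be serialized with the same field names the consumer expects and then handed to a Kafka producer; the topic name and producer library are deployment‑specific, so only the serialization step is sketched here:

```go
package main

import "encoding/json"

// LikeMessage mirrors the payload that ConsumeLike receives on the consumer side.
type LikeMessage struct {
	UserID    int64  `json:"user_id"`
	ArticleID string `json:"article_id"`
	Action    int    `json:"action"` // 1 = like, 0 = unlike
}

// buildLikeMessage serializes the event. sendLikeMQ would hand these bytes to
// a Kafka producer (e.g. segmentio/kafka-go or sarama), keyed by article ID so
// that all events for one article land in the same partition and stay ordered.
func buildLikeMessage(userID int64, articleID string, action int) ([]byte, error) {
	return json.Marshal(LikeMessage{UserID: userID, ArticleID: articleID, Action: action})
}
```

Keying by article ID preserves per‑article ordering, which matters because a like followed by an unlike must be applied in that order.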
func ConsumeLike(msg LikeMessage) {
	// Upsert keyed on (user_id, article_id): replaying the same event converges
	// to the same row, which keeps consumption idempotent under Kafka's
	// at-least-once delivery.
	err := db.Exec(`
		INSERT INTO user_like (user_id, article_id, status)
		VALUES (?, ?, ?)
		ON DUPLICATE KEY UPDATE status = VALUES(status)
	`, msg.UserID, msg.ArticleID, msg.Action).Error
	if err != nil {
		log.Printf("persist like failed: %v", err) // retry or dead-letter in production
	}
}
Collect System (Strong Consistency)
Favorites are less frequent, so they are written directly to the database with a transactional increment of the article’s collect count.
func CollectArticle(c *gin.Context) {
	userID := c.GetInt64("userID")
	articleID := c.Param("id")
	// One transaction: the favorite record and the counter move together,
	// so the collect count cannot drift from the user_collect rows.
	err := db.Transaction(func(tx *gorm.DB) error {
		if err := tx.FirstOrCreate(&UserCollect{}, UserCollect{UserID: userID, ArticleID: articleID}).Error; err != nil {
			return err
		}
		return tx.Model(&Article{}).Where("id = ?", articleID).
			UpdateColumn("collect_count", gorm.Expr("collect_count + 1")).Error
	})
	if err != nil {
		c.JSON(500, gin.H{"msg": "fail"})
		return
	}
	c.JSON(200, gin.H{"msg": "ok"})
}
Reading Behavior (PV / UV / Read)
PV: a simple INCR on a per‑article Redis key.
UV: Redis HyperLogLog (PFADD) to approximate unique visitors in constant memory.
Read: count a read only after the user stays on the page for at least 5 seconds.
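The PV and UV writes each map to a single Redis command (INCR and PFADD). A minimal sketch of the recording path, with the Redis client abstracted behind an interface and an in‑memory stand‑in so the example is self‑contained; the `article:pv:` / `article:uv:` key names are assumptions in the style of the earlier key design:

```go
package main

import "fmt"

// Counter abstracts the two Redis operations the read path needs:
// INCR for page views and PFADD for HyperLogLog-based unique visitors.
// In production this would be backed by a go-redis client.
type Counter interface {
	Incr(key string) int64
	PFAdd(key, member string)
}

// RecordView registers one view: PV is exact, UV is deduplicated per user.
func RecordView(c Counter, articleID, userID string) int64 {
	c.PFAdd(fmt.Sprintf("article:uv:%s", articleID), userID) // approximate uniques
	return c.Incr(fmt.Sprintf("article:pv:%s", articleID))   // exact page views
}

// memCounter is the in-memory stand-in; a real HyperLogLog would use
// fixed memory instead of a set, at the cost of ~0.8% counting error.
type memCounter struct {
	counts map[string]int64
	sets   map[string]map[string]struct{}
}

func newMemCounter() *memCounter {
	return &memCounter{counts: map[string]int64{}, sets: map[string]map[string]struct{}{}}
}

func (m *memCounter) Incr(key string) int64 { m.counts[key]++; return m.counts[key] }

func (m *memCounter) PFAdd(key, member string) {
	if m.sets[key] == nil {
		m.sets[key] = map[string]struct{}{}
	}
	m.sets[key][member] = struct{}{}
}
```

With go-redis the two calls become `rdb.PFAdd(ctx, key, userID)` and `rdb.Incr(ctx, key)`.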
Frontend Example
setTimeout(() => {
  reportRead(articleId)
}, 5000)
Kubernetes Deployment Design
Interaction Service Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: interaction-service
spec:
  replicas: 6
  selector:
    matchLabels:
      app: interaction
  template:
    metadata:
      labels:
        app: interaction
    spec:
      containers:
        - name: interaction
          image: interaction-service:v1
          ports:
            - containerPort: 8080
Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: interaction-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: interaction-service
  minReplicas: 4
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
Operational Observations
Hot articles can generate > 20 k likes per second; Redis handles the write burst.
Interaction Service scales automatically via HPA, while the Article Service remains unaffected.
Anti‑fake‑like measures include gateway rate limiting, Redis‑based frequency control, and Kafka‑based traceability for post‑attack data correction.
Conclusion
Treating the interaction subsystem as an independent, high‑concurrency component—backed by Redis for fast counters, Kafka for decoupled persistence, MySQL for durable facts, and Kubernetes for elastic scaling—solves the scalability and reliability problems of the original monolith.
Key takeaway: likes, favorites, and reads = Redis for concurrency + MQ for decoupling + DB for facts + K8s for scaling.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Ray's Galactic Tech
Practice together, never alone. We cover programming languages, development tools, learning methods, and pitfall notes. We simplify complex topics, guiding you from beginner to advanced. Weekly practical content—let's grow together!
