
MongoDB Architecture and Performance Optimizations for Tencent Cloud K‑Song Feed Service

The article details how Tencent Cloud’s K‑Song feed service, serving over 150 million daily users, was engineered with a read‑expansion model, cached results, auxiliary index tables, hashed sharding, write‑concern tuning, disabled chain replication, and WiredTiger and backup optimizations, achieving sub‑10 ms write latency and significantly lower CPU and slow‑query rates.

Tencent Cloud Developer

This document presents the design, challenges, and a series of optimizations applied to the MongoDB‑based backend of Tencent Cloud's K‑Song (全民K歌) product, which serves over 150 million daily active users across multiple industries.

Business Characteristics

A massive fan base, with celebrity ("large‑V") users followed by millions, creates heavy relationship‑graph expansion and high write amplification.

Feed stream requires fine‑grained control: VIP exposure limits, anti‑spam, friend‑merged feeds, real‑time insertion, low‑quality feed throttling, etc.

Read‑Write Model Selection

The service adopts a read‑expansion (fan‑out‑on‑read, i.e. pull) model: most operations are reads, and fanning out writes from large‑V users with millions of followers would otherwise cause high write latency and storage cost.

Drawbacks of the read‑expansion model include:

Page‑turning incurs increasing scan cost as the timeline grows.

Global filtering, insertion and frequency‑control become CPU‑intensive with millions of followers.

Read‑Expansion Optimizations

Cache the read‑expansion results to avoid repeated heavy scans.

Introduce an auxiliary index table (FeedId_userId_relationship) to resolve cross‑shard queries without broadcasting.

Use Redis + Lua scripts to store fan‑count aggregates, avoiding slow count operations.
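The article names Redis + Lua for fan‑count aggregates but does not show the script. A minimal sketch (the key naming `fans:{userId}` is an assumption) that atomically applies a follow/unfollow delta and returns the new count, run via `EVAL` so the update and read happen as one atomic step:

```lua
-- KEYS[1] = fan-count key, e.g. "fans:{userId}" (naming assumed)
-- ARGV[1] = delta: +1 on follow, -1 on unfollow
local n = redis.call("INCRBY", KEYS[1], tonumber(ARGV[1]))
if n < 0 then
  -- guard against going negative on a duplicate unfollow
  redis.call("SET", KEYS[1], 0)
  n = 0
end
return n
```

Invoked as, e.g., `EVAL <script> 1 fans:author-001 1`. Reading the counter from Redis replaces a slow `count()` over the follower collection.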

Example of the auxiliary index query:

db.FeedId_userId_relationship.find({FeedId:"12345"})
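The article does not show how the auxiliary table is populated. A plausible mongosh sketch (field values and the `relationship` vocabulary are assumptions inferred from the table name) is to write one routing document per feed at publish time, backed by an index on the lookup key:

```javascript
// Hypothetical write path: when a feed is published, record a routing
// entry so later lookups by FeedId avoid a cross-shard broadcast.
db.FeedId_userId_relationship.insertOne({
  FeedId: "12345",        // key the read path queries on
  userId: "author-001",   // owner of the feed (value assumed)
  relationship: "owner",  // e.g. owner / friend / fan (vocabulary assumed)
  createdAt: new Date()
});

// A supporting index keeps the lookup a single index seek:
db.FeedId_userId_relationship.createIndex({ FeedId: 1 });
```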

MongoDB Sharding Strategy

The follower table is hashed‑sharded on userId and the Feed detail table (FeedInfo) on FeedId, with pre‑splitting into 8192 × shardCount initial chunks:

sh.shardCollection("mydb.follower", {userId:"hashed"}, false, {numInitialChunks:8192*shardCount})
sh.shardCollection("mydb.FeedInfo", {FeedId:"hashed"}, false, {numInitialChunks:8192*shardCount})

This ensures balanced writes and enables point queries to hit a single shard.
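Whether a point query really routes to a single shard can be checked with `explain()`; on a hashed‑sharded collection, an equality match on the full shard key should produce a single‑shard plan (the FeedId value here is assumed):

```javascript
// Equality match on the shard key of a hashed-sharded collection:
db.FeedInfo.find({ FeedId: "12345" }).explain().queryPlanner.winningPlan
// The winning plan's stage should be "SINGLE_SHARD" rather than
// "SHARD_MERGE", confirming no scatter-gather broadcast via mongos.
```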

Write‑Concern Tuning

{w:0} – fire‑and‑forget; highest throughput, no durability guarantee.

{w:1} – the default; the write is acknowledged by the primary only.

{w:"majority"} – the write is acknowledged by a majority of replica‑set members, guaranteeing durability across failover.
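Write concern can also be set per operation rather than cluster‑wide, so latency‑sensitive and durability‑critical paths can differ. A mongosh sketch (document contents and the 100 ms timeout are assumptions):

```javascript
// Latency-sensitive feed insert: acknowledge on the primary only.
db.FeedInfo.insertOne(
  { FeedId: "12345", userId: "author-001", content: "..." },
  { writeConcern: { w: 1 } }
);

// Durability-critical relationship update: wait for a majority, but
// bound the wait so a degraded replica set surfaces as an error
// instead of an open-ended stall.
db.follower.updateOne(
  { userId: "fan-007" },
  { $addToSet: { following: "author-001" } },
  { writeConcern: { w: "majority", wtimeout: 100 } }
);
```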

For K‑Song, the majority write‑concern caused latency spikes due to chain replication; disabling chaining reduced write latency to <10 ms.

Chain Replication Management

Chain replication (A→B→C) was disabled via:

cfg = rs.conf()
cfg.settings.chainingAllowed = false
rs.reconfig(cfg)

Result: stable write latency and reduced CPU load.

Connection‑Pool and Cache Tuning

Set the connection pool's minimum and maximum sizes to the same value so connections are pre‑established, cutting connection‑setup overhead during traffic bursts.

Adjusted WiredTiger eviction settings: eviction_target=60 (background eviction starts at 60 % cache usage), eviction_trigger=97 (application threads begin assisting eviction at 97 %), and increased eviction threads to 20, lowering the peak slow‑query rate from 150k/min to 20k/min.
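These parameters can be applied to a running mongod via the `wiredTigerEngineRuntimeConfig` server parameter, avoiding a restart (pinning the eviction thread pool with min = max is an assumption about how the count of 20 was set):

```javascript
// Apply the eviction tuning at runtime without restarting mongod:
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig:
    "eviction_target=60,eviction_trigger=97," +
    "eviction=(threads_min=20,threads_max=20)"
});
```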

Backup‑Induced Performance Impact

Full and incremental backups at midnight caused CPU spikes, higher latency, and increased slow‑query logs. The mitigation was to hide backup nodes from clients during the backup window, ensuring they are not part of the read path.
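One standard way to keep a backup node out of the client read path is to mark it hidden with priority 0 in the replica‑set configuration (the member index 2 below is an assumption about which secondary runs backups):

```javascript
// Hide the dedicated backup secondary: clients never route reads to it
// and it can never be elected primary, so backup I/O stays isolated.
cfg = rs.conf()
cfg.members[2].hidden = true     // index 2 assumed to be the backup node
cfg.members[2].priority = 0      // hidden members must have priority 0
rs.reconfig(cfg)
```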

Overall Benefits

Reduced duplicate computation via cache and auxiliary indexes.

Global feed view enables unified policy enforcement.

Write latency stabilized under 10 ms after disabling chain replication.

Backup‑time service jitter eliminated.

The presented optimizations illustrate practical database engineering techniques for large‑scale, high‑throughput social feed systems.

Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.