How Tencent News Cut PUSH Platform Code by 87% and Boosted Performance 3.5×

The article details how Tencent News' PUSH platform was re‑architected—consolidating modules, unifying the tech stack to Go, building an in‑house message channel, and introducing batch IO and priority scheduling—resulting in a 70% cost cut, 3.5‑fold throughput increase, and dramatically lower latency.

Tencent Cloud Developer

Introduction

The Tencent News PUSH platform distributes premium news and serves as a key user-activation channel. The re-architecture delivered major technical gains: code shrank from 680k lines to 86k, most C++ modules were rewritten in Go, and an over-fragmented microservice layout was consolidated.

Old Architecture Issues

An excessively long call chain (up to 18 modules and 17 RPC hops) inflated latency.

Service dependencies bottlenecked by slow audience‑package fetching.

Poor fault tolerance and lack of automatic failover.

No priority distinction between manual and automated pushes.

Mixed C++ and Go stacks hindered code reuse.

Low testing efficiency and long lead‑time for changes.

Optimization Solutions

Full‑link Business Closure

Built an in‑house message channel, merged 15 modules into 6, cut code to 86k lines, and unified interfaces, raising registration success from 90% to 99.9%.

Unified Technology Stack

Rewrote all non‑recommendation modules in Go, eliminating C++ components.

Link Consolidation

Combined trigger and scheduling modules, reduced RPC hops from 17 to 2, and streamlined data flow.

Number‑Package Service

Implemented a custom paging service for audience packages, removing the external bottleneck.
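The article does not show the service's API, but the idea can be sketched as a simple paging function: downstream workers pull the audience package in fixed-size chunks instead of issuing one huge blocking fetch. All names below are illustrative.

```go
package main

import "fmt"

// Page returns the slice of user IDs for zero-based page n of size
// pageSize, or nil once the audience package is exhausted.
func Page(ids []int64, n, pageSize int) []int64 {
	start := n * pageSize
	if start >= len(ids) {
		return nil
	}
	end := start + pageSize
	if end > len(ids) {
		end = len(ids)
	}
	return ids[start:end]
}

func main() {
	ids := []int64{1, 2, 3, 4, 5, 6, 7}
	// Drain the package page by page, as a push worker would.
	for n := 0; ; n++ {
		p := Page(ids, n, 3)
		if p == nil {
			break
		}
		fmt.Println(p)
	}
}
```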

Offline Pre‑Filtering

Split audience packages by system/brand, moving filtering offline to cut online latency.
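A minimal sketch of that offline split, assuming a per-user brand field (the article does not specify the record layout): grouping users by device brand once, offline, means the online path only reads the package matching each vendor channel.

```go
package main

import "fmt"

// User is an illustrative audience-package record.
type User struct {
	ID    int64
	Brand string // device brand/system, e.g. vendor push channel key
}

// SplitByBrand groups user IDs into per-brand packages so brand/system
// filtering runs offline, before the latency-sensitive push path.
func SplitByBrand(users []User) map[string][]int64 {
	out := make(map[string][]int64)
	for _, u := range users {
		out[u.Brand] = append(out[u.Brand], u.ID)
	}
	return out
}

func main() {
	pkgs := SplitByBrand([]User{
		{1, "brandA"}, {2, "brandB"}, {3, "brandA"},
	})
	fmt.Println(len(pkgs["brandA"]), len(pkgs["brandB"]))
}
```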

Batch IO Aggregation

Wrapped single IO calls into asynchronous queues, enabling batch processing and higher throughput.

Priority Scheduling

Introduced task priority queues and user‑level ordering to ensure hot news reaches users first.

Automatic Fault Recovery

Added backup nodes for sharding and LRU cache monitoring to automatically reroute traffic on node failure.
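The failover decision itself is simple to sketch, under the assumption (not stated in the article) that health state is maintained by a separate monitor: each shard knows a primary and a backup, and routing prefers the primary while it is healthy.

```go
package main

import (
	"errors"
	"fmt"
)

// Node is a push-serving node with a health flag kept up to date by
// an external monitor (illustrative).
type Node struct {
	Addr    string
	Healthy bool
}

// Shard maps a key range to a primary node and a standby.
type Shard struct {
	Primary, Backup *Node
}

// Route returns the node that should serve this shard, falling over
// to the backup automatically when the primary is down.
func (s Shard) Route() (*Node, error) {
	if s.Primary.Healthy {
		return s.Primary, nil
	}
	if s.Backup.Healthy {
		return s.Backup, nil
	}
	return nil, errors.New("shard has no healthy node")
}

func main() {
	s := Shard{
		Primary: &Node{"10.0.0.1:8080", false},
		Backup:  &Node{"10.0.0.2:8080", true},
	}
	n, _ := s.Route()
	fmt.Println(n.Addr) // traffic rerouted to the backup
}
```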

Automated Testing

Created regression and diff testing pipelines that replay recorded traffic with captured dependency data.
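The core of such a diff pipeline can be sketched as replaying captured request/response pairs against the new implementation and reporting divergences; the record shape and function names here are assumptions, not the team's actual harness.

```go
package main

import "fmt"

// Record is one captured production exchange, with dependency
// responses already baked into the expected output (illustrative).
type Record struct {
	Input string
	Want  string
}

// ReplayDiff runs newImpl over recorded traffic and collects every
// input whose output diverges from the captured baseline.
func ReplayDiff(records []Record, newImpl func(string) string) []string {
	var diffs []string
	for _, r := range records {
		if got := newImpl(r.Input); got != r.Want {
			diffs = append(diffs, fmt.Sprintf("%s: got %q, want %q", r.Input, got, r.Want))
		}
	}
	return diffs
}

func main() {
	records := []Record{{"req-1", "ok"}, {"req-2", "ok"}}
	identical := func(in string) string { return "ok" }
	fmt.Println(len(ReplayDiff(records, identical))) // 0 diffs: safe to ship
}
```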

Results

Operational cost reduced by ~70%.

Push throughput increased 3.5×.

Hot‑push latency (P90) dropped 90%.

Click PV for hot pushes rose 10% and overall UV improved.

Zero incidents after February 2025.

Architecture Overview
Tags: Performance, Microservices, Scalability, Golang, Push
Written by

Tencent Cloud Developer

Official Tencent Cloud community account that brings together developers, shares practical tech insights, and fosters an influential tech exchange community.
