Design and Evolution of Vivo Live‑Streaming IM Message System
Vivo’s live‑streaming IM system combines short‑polling and long‑connection techniques, Redis SortedSet storage, priority‑based routing, protobuf compression, and discard strategies to deliver a fault‑tolerant, high‑concurrency messaging backbone that scales with growing traffic and diverse message types.
Background : In Vivo’s live‑streaming platform, the instant‑message (IM) system is a core component. A stable, fault‑tolerant, high‑concurrency message module directly impacts user experience. The article introduces the message model, its architecture, and the evolutionary upgrades performed over a year of production.
Live‑Streaming Message Business : The system distinguishes several message types – point‑to‑point (unicast), room‑wide (multicast), and broadcast – and further categorises them by business scenario (gift, public chat, PK, notifications). Message priority is essential; for example, gift messages outrank public chat, and high‑value gifts outrank low‑value ones. Prioritisation prevents UI stutter when a hot room generates more than 15–20 messages per second.
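The priority ordering described above can be sketched as a simple comparable level plus a sort key. The level names and numeric values below are illustrative assumptions, not the system's actual enum; the point is only that gifts outrank chat, and high-value gifts outrank ordinary ones.

```python
from enum import IntEnum

class MsgPriority(IntEnum):
    """Hypothetical priority levels: lower value = rendered first."""
    HIGH_VALUE_GIFT = 0
    GIFT = 1
    PK = 2
    NOTIFY = 3
    CHAT = 4  # shed first under load

def order_for_render(messages):
    """Sort by priority level, then by creation time within a level."""
    return sorted(messages, key=lambda m: (m["prio"], m["ts"]))

msgs = [
    {"prio": MsgPriority.CHAT, "ts": 2, "body": "hi"},
    {"prio": MsgPriority.GIFT, "ts": 3, "body": "rose x1"},
    {"prio": MsgPriority.HIGH_VALUE_GIFT, "ts": 5, "body": "rocket"},
]
print([m["body"] for m in order_for_render(msgs)])  # → ['rocket', 'rose x1', 'hi']
```

A total order like this is what lets the client render important messages first when a hot room exceeds the 15–20 messages/second it can comfortably display.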
Message Technology Points :
Message architecture model (illustrated with diagrams).
Short polling vs. long connection.
Short Polling :
Client polls the server every 2 seconds with roomId and the last-seen timestamp.
Server returns a limited number of messages (e.g., 10‑15) and the latest timestamp for the next request.
Polling interval is tuned according to room size (e.g., 1.5 s for <100 users, 2 s for >1000 users, etc.).
Key concerns: timestamp validation, duplicate‑message detection, and handling of massive message bursts.
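The polling round described above — a timestamp cursor, a capped page, and duplicate filtering — can be sketched as follows. The message shape and `PAGE_SIZE` constant are assumptions for illustration; the article only states that responses are limited to roughly 10–15 messages.

```python
PAGE_SIZE = 10  # server caps each response; the article cites 10-15

def poll(server_msgs, last_ts, seen_ids):
    """One short-polling round: fetch up to PAGE_SIZE messages newer than
    last_ts, drop any ids already seen, and advance the timestamp cursor."""
    batch = [m for m in server_msgs if m["ts"] > last_ts][:PAGE_SIZE]
    fresh = [m for m in batch if m["id"] not in seen_ids]
    seen_ids.update(m["id"] for m in fresh)
    cursor = batch[-1]["ts"] if batch else last_ts
    return fresh, cursor

room = [{"id": i, "ts": i} for i in range(1, 25)]
seen: set = set()
fresh, cursor = poll(room, 0, seen)
print(len(fresh), cursor)  # → 10 10
```

Returning the latest timestamp with each page is what keeps a burst of messages from being re-fetched or skipped on the next round.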
Long Connection :
Clients obtain a TCP long‑connection IP via HTTP, then establish a full‑duplex encrypted channel.
Keep-alive and smart heartbeat mechanisms detect client crashes or network breaks.
Connection management includes load‑balanced clusters, hot‑update plug‑in architecture, and graceful reconnection.
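One way a "smart heartbeat" can behave is sketched below: widen the ping interval while the network is stable, fall back to a short interval after a miss, and declare the connection dead after several consecutive misses. The intervals and miss threshold are invented for illustration; the article does not specify the real parameters.

```python
class Heartbeat:
    """Hypothetical smart-heartbeat sketch (all constants are assumptions)."""
    MIN_INTERVAL, MAX_INTERVAL, MAX_MISSES = 10, 60, 3  # seconds / count

    def __init__(self):
        self.interval = self.MIN_INTERVAL
        self.misses = 0

    def on_pong(self):
        """A reply arrived: reset misses and back off the ping rate."""
        self.misses = 0
        self.interval = min(self.interval * 2, self.MAX_INTERVAL)

    def on_timeout(self):
        """No reply in time: probe faster; report death after repeated misses."""
        self.misses += 1
        self.interval = self.MIN_INTERVAL
        return self.misses >= self.MAX_MISSES  # True => presume connection dead
```

Requiring several consecutive misses before tearing down the connection avoids dropping clients on a single lost packet, while the adaptive interval keeps battery and bandwidth costs low on stable networks.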
Message Storage with Redis SortedSet :
Four SortedSets per room: live::roomId::gift, live::roomId::chat, live::roomId::notify, live::roomId::pk.
Score = message generation timestamp; value = serialized JSON.
SortedSet chosen for low insertion/query complexity, small memory footprint, and easy persistence.
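The storage model above maps naturally onto Redis's ZADD and ZRANGEBYSCORE. The sketch below mimics that access pattern in memory (a sorted list per key) so it runs without a Redis server; it is a model of the data layout, not the production code.

```python
import bisect
import json

class RoomStore:
    """In-memory stand-in for the four per-room Redis SortedSets
    (live::roomId::gift / chat / notify / pk):
    score = message timestamp, value = serialized JSON."""
    KINDS = ("gift", "chat", "notify", "pk")

    def __init__(self, room_id):
        self.room_id = room_id
        self.keys = {k: [] for k in self.KINDS}  # each: sorted [(score, payload)]

    def zadd(self, kind, ts, msg):
        """ZADD live::roomId::<kind> ts <json>"""
        bisect.insort(self.keys[kind], (ts, json.dumps(msg)))

    def zrangebyscore(self, kind, min_ts, max_ts):
        """ZRANGEBYSCORE: all messages with min_ts <= score <= max_ts."""
        zset = self.keys[kind]
        lo = bisect.bisect_left(zset, (min_ts, ""))
        hi = bisect.bisect_right(zset, (max_ts, "\uffff"))
        return [json.loads(p) for _, p in zset[lo:hi]]

store = RoomStore("888")
store.zadd("chat", 100, {"body": "hi"})
store.zadd("chat", 200, {"body": "yo"})
print(store.zrangebyscore("chat", 150, 300))  # → [{'body': 'yo'}]
```

Using the timestamp as the score is what lets the short-polling path answer "everything since my last timestamp" with a single logarithmic range query per key.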
Message Distribution :
Unicast, multicast, and broadcast are all routed through the IM long‑connection server.
Business servers handle event logic (e.g., gift deduction, content moderation) before invoking IM APIs.
Clients receive a unified message format regardless of the underlying transport.
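The three delivery scopes can be sketched as one dispatcher in front of the connection table. The `connections` and `room_index` shapes here are assumptions for illustration; a real long-connection server tracks sessions per node behind a load balancer.

```python
def dispatch(msg, connections, room_index):
    """Route one message through the long-connection layer.
    connections: uid -> send callable; room_index: roomId -> set of uids
    (hypothetical shapes). Every scope uses the same wire format."""
    scope = msg["scope"]
    if scope == "unicast":
        targets = [msg["to"]]
    elif scope == "multicast":
        targets = room_index.get(msg["roomId"], set())
    else:  # broadcast
        targets = list(connections)
    for uid in targets:
        connections[uid](msg)

inbox = {u: [] for u in ("a", "b", "c")}
conns = {u: inbox[u].append for u in inbox}
rooms = {"r1": {"a", "b"}}

dispatch({"scope": "multicast", "roomId": "r1", "body": "gift"}, conns, rooms)
print(sorted(u for u, box in inbox.items() if box))  # → ['a', 'b']
```

Because business servers only invoke this API after their own logic (deduction, moderation) has run, the IM layer stays a pure delivery pipe.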
Message Compression & Block Messages :
Protobuf reduces payload size by ~43 %.
Block messages merge multiple messages within a 1‑2 s window, sharing a common header to lower bandwidth and avoid message storms.
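Block merging can be sketched by bucketing messages into fixed time windows and emitting one block per (room, window) with a shared header. The block layout below is an assumption; the article only states that messages within a 1–2 s window share a common header.

```python
def pack_blocks(messages, window=2):
    """Merge messages whose timestamps fall in the same `window`-second
    bucket into one block carrying a shared header (roomId, base ts)."""
    blocks = {}
    for m in sorted(messages, key=lambda m: m["ts"]):
        header = (m["roomId"], (m["ts"] // window) * window)
        blocks.setdefault(header, []).append(m["body"])
    return [{"roomId": r, "ts": t, "bodies": b} for (r, t), b in blocks.items()]

msgs = [{"roomId": "r", "ts": t, "body": f"m{t}"} for t in range(4)]
print(pack_blocks(msgs))
# → [{'roomId': 'r', 'ts': 0, 'bodies': ['m0', 'm1']},
#    {'roomId': 'r', 'ts': 2, 'bodies': ['m2', 'm3']}]
```

Sending one header per window instead of one per message is where the bandwidth saving comes from, and it lets the client render a burst as a single batch rather than a storm of individual frames.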
Message Discard Strategy :
When traffic exceeds capacity, low‑priority messages are dropped based on predefined levels.
Messages carry creation and send timestamps; overly old messages are discarded.
Incremental correction messages are used to rebuild client state after losses.
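The discard policy above can be sketched as two passes: drop messages that have aged past a staleness limit, then shed the lowest-priority survivors until the queue fits the channel's capacity. `CAPACITY` and `MAX_AGE` are invented constants for the sketch; the real thresholds are tuned in production.

```python
CAPACITY = 3   # max messages accepted per tick (assumed value)
MAX_AGE = 5    # seconds before a message counts as stale (assumed value)

def apply_discard(queue, now):
    """Drop stale messages first, then keep only the highest-priority
    survivors (lower prio number = more important) up to CAPACITY."""
    fresh = [m for m in queue if now - m["created"] <= MAX_AGE]
    fresh.sort(key=lambda m: (m["prio"], m["created"]))
    return fresh[:CAPACITY]

queue = [
    {"prio": 4, "created": 10},  # chat
    {"prio": 0, "created": 11},  # high-value gift
    {"prio": 4, "created": 1},   # stale chat, dropped by age
    {"prio": 2, "created": 12},  # PK
    {"prio": 4, "created": 13},  # chat, shed by capacity
]
print([m["prio"] for m in apply_discard(queue, 14)])  # → [0, 2, 4]
```

Because the client can reconcile via incremental correction messages afterwards, shedding low-priority traffic under load degrades the experience gracefully instead of stalling the whole stream.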
Conclusion : The live‑streaming IM system continuously evolves to meet growing traffic and feature demands. By combining short polling, long connections, Redis SortedSet storage, priority handling, and compression, Vivo achieves a scalable, reliable messaging backbone for its live‑streaming services.
vivo Internet Technology
Sharing practical vivo Internet technology insights and salon events, plus the latest industry news and hot conferences.