
How Douyu Built Its Own High‑Performance P2P Live‑Streaming System

Douyu’s senior streaming engineer Zhou Sha details the company’s self‑developed P2P solution, covering its background, architecture, key technologies such as sub‑streaming, WebRTC, data slicing, SDK design, and the strategies used to boost sharing rates and future roadmap.


1. Background of Douyu P2P

Douyu is a NASDAQ-listed live-streaming company that spends over 150 million CNY per quarter on bandwidth. Before 2019 it relied on commercial P2P solutions, whose high service fees prompted an in-house P2P project launched in February 2019. After gray (canary) testing and performance tuning, large-scale deployment began after the National Day holiday that year; the system now carries 70-80 % of Douyu's traffic.

2. Douyu P2P Architecture

The architecture consists of a server side and a client‑SDK side. The server side includes three components:

Shard cluster (P2P source): pulls full FLV streams from a commercial source, slices them into fragments, stores them in memory for CDN back‑pull, and provides a 302 scheduling service for CDN fragment placement, using long TCP connections to CDN L2 nodes.

P2P scheduling system: manages resources and user admission.

Tracker cluster: tracks users in the same stream, facilitates P2P connections and signaling.

When a user enters a live room, the SDK calls the `get_link` API and receives the stream URL and P2P configuration. It then contacts the scheduling system, receives a Tracker IP, opens a WebSocket connection to the tracker, and receives a list of peers in the same stream. The client downloads part of the data from the CDN and the rest from peers.
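The room-join sequence can be sketched as follows. This is a minimal illustration, not Douyu's actual SDK code: the `get_link` name comes from the article, but the payload fields and the `scheduler`/`connect_ws` interfaces are assumptions made for the sketch.

```python
def join_room(room_id, api, scheduler, connect_ws):
    """Walk through the join flow: get_link -> scheduler -> tracker -> peers."""
    # 1. get_link returns the stream URL plus the P2P configuration.
    link = api.get_link(room_id)  # e.g. {"url": ..., "p2p": {"enabled": ...}}
    if not link["p2p"]["enabled"]:
        # P2P disabled for this room: play straight from the CDN.
        return {"mode": "cdn", "url": link["url"]}
    # 2. The scheduling system admits the user and returns a Tracker IP.
    tracker_ip = scheduler.admit(room_id, link["p2p"])
    # 3. Open a WebSocket to the tracker and receive peers in the same stream.
    ws = connect_ws(tracker_ip)
    peers = ws.recv_peer_list()
    # 4. Download partly from CDN, the rest from these peers.
    return {"mode": "p2p", "url": link["url"], "peers": peers}
```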

2.1 P2P Scheduling System

The scheduling system contains a configuration center (operational platform) for per‑room P2P enablement and source selection, a channel management service that dynamically allocates trackers based on audience size, and a monitoring service that removes unhealthy instances.
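The channel-management service's dynamic tracker allocation might look like the sizing rule below. This is purely illustrative: the article says trackers are allocated based on audience size, but the per-tracker capacity figure and the ceiling-division rule are assumptions.

```python
def trackers_needed(audience: int, capacity_per_tracker: int = 20_000) -> int:
    """Allocate enough tracker instances for a room's audience.

    capacity_per_tracker is a hypothetical figure, not from the article.
    Uses ceiling division so any remainder gets its own instance."""
    return max(1, -(-audience // capacity_per_tracker))
```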

3. Key Technologies

3.1 Sub‑stream Mode

Video fragments are divided into multiple logical sub‑streams (e.g., three sub‑streams where sub‑stream 1 handles fragments 1, 4, 7, etc.). A user receives some fragments from CDN and the rest from peers assigned to other sub‑streams, achieving a theoretical sharing ratio of 66.7 % for three sub‑streams. Increasing the number of sub‑streams raises the ratio but also adds connection overhead; Douyu currently uses six sub‑streams.
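The fragment-to-sub-stream assignment described above is effectively a modulo mapping: with N sub-streams, a user pulls 1/N of the fragments from CDN and can share the remaining (N-1)/N with peers. A minimal sketch, using 1-based fragment numbers as in the article's example:

```python
def substream_of(fragment_no: int, num_substreams: int) -> int:
    """Map a 1-based fragment number to its 1-based sub-stream.

    With 3 sub-streams, fragments 1, 4, 7, ... land on sub-stream 1,
    matching the assignment described in the article."""
    return (fragment_no - 1) % num_substreams + 1

def sharing_ratio(num_substreams: int) -> float:
    """Theoretical sharing ratio: a user fetches 1/N of fragments from
    CDN, so (N-1)/N can come from peers (2/3 for N=3, 5/6 for N=6)."""
    return 1.0 - 1.0 / num_substreams
```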

An “emergency window” ensures that any fragment not received from peers within four seconds is fetched from CDN to avoid playback stalls.
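The emergency-window fallback reduces to a deadline check around the peer download path. A minimal sketch, where `try_peer` and `fetch_cdn` are hypothetical callbacks standing in for the SDK's real transport; only the 4-second timeout behavior comes from the article:

```python
import time

EMERGENCY_WINDOW_SEC = 4.0  # per the article: fall back to CDN after 4 s

def fetch_fragment(fragment_no, try_peer, fetch_cdn, now=time.monotonic):
    """Try peers until the emergency window expires, then hit CDN.

    try_peer(fragment_no) returns bytes when a peer delivers the
    fragment, or None when nothing has arrived yet."""
    deadline = now() + EMERGENCY_WINDOW_SEC
    while now() < deadline:
        data = try_peer(fragment_no)
        if data is not None:
            return data, "p2p"
    # Window expired: fetch from CDN so playback never stalls.
    return fetch_cdn(fragment_no), "cdn"
```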

3.2 WebRTC

Douyu replaced the original Flash‑based P2P with WebRTC to enable data sharing across H5, PC, Android, and iOS. The original WebRTC library (~1 M lines) was trimmed to a 30‑40 k line “DyRTC” component, keeping only DataChannel, DTLS, and P2P modules, which reduced SDK size and crash rates.

3.3 Data Slicing

Live streams use HTTP‑FLV. The server creates time‑based slices (Packages) each containing a header and multiple 1 KB‑scale chunks. The smallest distribution unit is a chunk, improving sharing efficiency. Two slicing approaches exist: an independent slicing service or embedding slice IDs in SEI frames during CDN transcoding.
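The Package/chunk layout can be illustrated with a simple split-and-reassemble sketch. The header fields here are illustrative assumptions; the article only specifies that a Package carries a header plus roughly 1 KB chunks, and that the chunk is the smallest distribution unit:

```python
def slice_package(payload: bytes, chunk_size: int = 1024):
    """Split one time-based Package payload into ~1 KB chunks.

    Peers request and exchange individual chunks, so a viewer can
    assemble one Package from several sources."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    header = {"chunk_count": len(chunks), "size": len(payload)}
    return header, chunks

def reassemble(header, chunks) -> bytes:
    """Rebuild the original payload once all chunks have arrived."""
    assert len(chunks) == header["chunk_count"], "missing chunks"
    return b"".join(chunks)
```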

3.4 P2P SDK

The SDK is the most code-intensive part of the system and is designed for crash resilience. All network components (HTTP/HTTPS, WebSocket, STUN, and RTMP clients, plus timers) are self-implemented. Most logic runs on a single thread to avoid lock contention, keeping P2P-related crashes negligible among the top 10 crash categories.

3.5 P2P Strategies

Sharing strategy groups users by ISP and region. NAT‑type based penetration rules avoid using symmetric NAT users as seeds and prevent port‑restricted users from being assigned to symmetric peers. 4G users download only; they do not upload.
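The NAT and network admission rules above amount to two predicates: who may act as a seed, and which downloader/seed pairs are worth attempting. A minimal sketch; the NAT-type constants and function names are assumptions, but the rules themselves (no symmetric-NAT seeds, no port-restricted-to-symmetric pairing, 4G users download only) come from the article:

```python
# NAT types, ordered roughly from easiest to hardest to traverse.
FULL_CONE, RESTRICTED, PORT_RESTRICTED, SYMMETRIC = range(4)

def can_seed(nat_type: int, network: str) -> bool:
    """Symmetric-NAT users are not used as seeds, and 4G users
    only download; they never upload."""
    return nat_type != SYMMETRIC and network != "4g"

def can_pair(downloader_nat: int, seed_nat: int) -> bool:
    """Port-restricted users are not assigned to symmetric peers,
    since that combination rarely punches through."""
    if downloader_nat == PORT_RESTRICTED and seed_nat == SYMMETRIC:
        return False
    return True
```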

Seed scoring evaluates online duration, NAT‑hole punching success, data stability, and device type to prioritize high‑quality peers.
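A weighted score over those factors might look like the sketch below. The article names the inputs (online duration, hole-punching success, data stability, device type) but not the weights or scale, so all numbers here are illustrative assumptions:

```python
def seed_score(online_minutes: float, punch_success_rate: float,
               stability: float, is_desktop: bool) -> float:
    """Score a candidate seed on a 0-100 scale (weights are assumed).

    punch_success_rate and stability are expected in [0, 1]."""
    score = 0.0
    score += min(online_minutes / 60.0, 1.0) * 30  # longer sessions rank higher
    score += punch_success_rate * 30               # reliable NAT traversal
    score += stability * 30                        # steady upload throughput
    score += 10 if is_desktop else 0               # desktops tend to be steadier
    return score
```

Peers would then be sorted by this score so high-quality seeds are handed out first.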

Additional optimizations include “main‑road sub‑stream” (70 % of users in a sub‑stream fetch data from the remaining 30 % peers), Trickle ICE for faster connection establishment, coarse‑grained regional grouping (3‑4 large zones), and IPv6 support.
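The "main-road sub-stream" split described above could be sketched as a simple partition of one sub-stream's peers. The 70/30 ratio is from the article; the partition rule itself is an assumption for illustration:

```python
def split_main_road(peers: list, fetch_ratio: float = 0.7):
    """Partition one sub-stream's peers: the first fetch_ratio of them
    fetch data from the remaining peers, who act as the 'main road'
    uploaders for that sub-stream.

    Returns (fetchers, uploaders)."""
    cut = int(len(peers) * fetch_ratio)
    return peers[:cut], peers[cut:]
```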

4. Summary & Future Plans

Douyu’s self‑built P2P now supports large‑scale live streaming with overall sharing ratios above 75 % (over 80 % for major rooms). Future work includes exploring QUIC‑like protocols for native connections, reducing WebRTC encryption overhead, extending P2P to video‑on‑demand, supporting 4K/8K streams with differentiated strategies, and scaling the solution to overseas markets with many ISPs.

Tags: live streaming, CDN, P2P, WebRTC, video architecture, data slicing
Written by Douyu Streaming, the official account of the Douyu Streaming Development Department, sharing audio and video technology best practices.