How NetEase’s Low‑Latency Streaming Player Achieves Sub‑200 ms Startup

This article explains the design, architecture, integration steps, and performance optimizations of NetEase Cloud‑Signal’s low‑latency streaming player built on WebRTC, covering its three‑module framework, FFmpeg and non‑FFmpeg integration, first‑frame acceleration, and resilience techniques.

NetEase Smart Enterprise Tech+

Introduction – In May 2022, the NetEase Zhiji “Yi+” open‑source program released its second project: the source code of NetEase Cloud‑Signal’s Low‑Latency Streaming (LLS) player, published on GitHub. This article shares the design and practice behind the WebRTC‑based low‑latency player.

Low‑Latency Player Overview – LLS builds on NetEase Cloud‑Signal’s standard live streaming, leveraging the self‑developed global real‑time transport network WE‑CAN to provide sub‑second latency, instant start‑up, and low‑jitter playback while remaining compatible with standard streaming pipelines.

The Low‑Latency Player SDK is a transport‑layer SDK built on WebRTC. It provides signaling, media connection, audio/video reception, and weak‑network mitigation, delivering a strong quality of experience (QoE).

Player Framework

The player consists of three modules:

WebRTC – Handles media connection, data reception, packet reordering, frame assembly, and weak‑network resilience.

RtdEngine – A wrapper around WebRTC providing APIs, engine creation, signaling, and media callbacks.

FFmpeg plug‑in – Wraps RtdEngine APIs into an FFmpeg plug‑in (ff_rtd_demuxer) extending AVInputFormat.

Data Flow

After receiving RTP packets from WE‑CAN, the Transport module forwards them to NetEQ and JitterBuffer for sorting, framing, and weak‑network handling. Processed data is then passed to RtdEngine, and the player reads audio/video via the FFmpeg plug‑in RtdDemuxer. Video is delivered as H264/H265, audio as PCM.

Integration Guide

Two integration paths are provided to reduce effort:

FFmpeg‑Based Player Integration

The SDK adds a custom AVInputFormat (ff_rtd_demuxer) implementing rtd_probe, rtd_read_header, rtd_read_packet, and rtd_read_close.

Steps (using ffplay from FFmpeg 4.3):

Copy rtd_dec.c into ffmpeg/libavformat.

Edit ffmpeg/libavformat/Makefile to compile rtd_dec.c.

Register the new AVInputFormat in ffmpeg/libavformat/allformats.c.

Add include and library paths in the FFmpeg configure command:

./configure --enable-shared --prefix="xxx/xxx" --extra-cflags=-I/xxx/rtd/include --extra-ldflags=-L/xxx/rtd/libs --extra-libs=-lrtd

Then build with make && make install. Place the compiled FFmpeg libraries together with rtd.dll in the player’s directory; a stream URL prefixed with nertc:// then enables low‑latency playback without further changes.

Non‑FFmpeg Player Integration

Directly call APIs defined in rtd_api.h. The integration flow is illustrated in the accompanying diagram.

Key Metric Optimizations

To improve first‑frame latency and weak‑network resilience, the following optimizations were applied:

First‑Frame Optimizations

Server‑side GOP caching – The server caches the nearest GOP so that when a client subscribes, the key frame can be sent immediately, reducing start‑up delay.
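The article does not show the server implementation, but the idea can be modeled in a few lines. The sketch below (hypothetical names, Python for illustration only) keeps exactly one GOP in memory: a key frame resets the cache, and a newly subscribed client receives the cached GOP starting from that key frame instead of waiting for the next one.

```python
from collections import deque

class GopCache:
    """Cache the frames of the most recent GOP so a new subscriber
    can start decoding immediately from a key frame."""

    def __init__(self):
        self.frames = deque()

    def push(self, is_key, payload):
        # A new key frame starts a new GOP: drop the previous one.
        if is_key:
            self.frames.clear()
        self.frames.append((is_key, payload))

    def burst_for_new_subscriber(self):
        # Send the cached GOP (key frame first) ahead of live data.
        return list(self.frames)

cache = GopCache()
cache.push(True, "I0")
cache.push(False, "P1")
cache.push(True, "I1")   # new GOP replaces the old one
cache.push(False, "P2")
assert cache.burst_for_new_subscriber() == [(True, "I1"), (False, "P2")]
```

The trade-off is server memory versus start-up delay: without the cache, a subscriber joining mid-GOP would have to discard frames until the next key frame arrives.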

Fast retransmission of lost packets before the first key frame – The server signals the sequence number of the first packet, enabling the client to detect loss early and trigger rapid retransmission.
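Because the server signals the sequence number of the first packet, the client can detect gaps before the first key frame is even complete. A minimal sketch of that gap detection (illustrative only, not the SDK's actual code):

```python
def detect_pre_keyframe_loss(first_seq, received_seqs):
    """Given the first sequence number signaled by the server and the
    packets received so far, return the sequence numbers to NACK."""
    expected = first_seq
    missing = []
    for seq in sorted(received_seqs):
        missing.extend(range(expected, seq))  # gap before this packet
        expected = seq + 1
    return missing

# Server announced the stream starts at seq 100; 102 and 103 were lost.
assert detect_pre_keyframe_loss(100, [100, 101, 104, 105]) == [102, 103]
```

Without the signaled first sequence number, the client could only infer loss after observing a gap between two received packets, which delays the first retransmission request.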

Reduced waiting time before first frame output – The jitter buffer’s initial wait is shortened, allowing video frames to be decoded and rendered sooner.

Resilience Optimizations

NACK enhancement – Enable audio NACK and adjust request intervals based on real‑time RTT for more efficient retransmission.
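Adjusting the NACK request interval to the measured RTT can be sketched as follows (a simplified model with assumed bounds, not the SDK's actual policy): retrying faster than one RTT wastes bandwidth, since the retransmission cannot have arrived yet, while retrying slower adds recovery delay.

```python
def nack_interval_ms(rtt_ms, min_interval_ms=20, max_interval_ms=150):
    """Space repeated NACK requests roughly one RTT apart,
    clamped to sane bounds."""
    return max(min_interval_ms, min(rtt_ms, max_interval_ms))

assert nack_interval_ms(5) == 20     # floor on very low RTT
assert nack_interval_ms(80) == 80    # track the measured RTT
assert nack_interval_ms(400) == 150  # cap on bad networks
```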

Dynamic jitter buffer – Continuously monitor network quality (packet loss, RTT, retransmission delay, jitter) and adapt the jitter buffer size to balance latency and smoothness.
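One common shape for such an adaptive buffer (a sketch under assumed coefficients, not the SDK's tuning) is to grow the target delay immediately when jitter or loss rises, but shrink it gradually when conditions improve, so playback does not oscillate:

```python
class DynamicJitterBuffer:
    """Adapt the target buffering delay to observed network quality,
    within [min_ms, max_ms]: grow fast on degradation, shrink slowly
    on recovery."""

    def __init__(self, min_ms=100, max_ms=1000):
        self.min_ms = min_ms
        self.max_ms = max_ms
        self.target_ms = min_ms

    def update(self, jitter_ms, loss_rate, rtt_ms):
        # Room for jitter plus retransmission round trips,
        # weighted by how lossy the link currently is.
        needed = jitter_ms * 2 + rtt_ms * loss_rate * 4
        if needed > self.target_ms:
            self.target_ms = min(self.max_ms, needed)            # grow fast
        else:
            self.target_ms = max(self.min_ms, self.target_ms - 20)  # decay slowly
        return self.target_ms

buf = DynamicJitterBuffer()
assert buf.update(10, 0.0, 50) == 100    # good network: stay at the floor
assert buf.update(200, 0.2, 150) == 520  # degraded: grow at once
assert buf.update(10, 0.0, 50) == 500    # recovering: step down gradually
```

The asymmetry is the point: underestimating the buffer causes visible stalls, while overestimating it only costs latency, so the controller errs toward growing quickly and shrinking cautiously.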

FEC/RED redundancy – Use Forward Error Correction and RED to send redundant packets, improving loss recovery when RTT is high.
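The simplest FEC scheme illustrating why redundancy beats retransmission at high RTT is a single XOR parity packet per group: any one lost packet in the group can be rebuilt locally, with no extra round trip. This is a toy model of the principle, not the actual FEC used by the SDK:

```python
def xor_parity(packets):
    """Build one XOR parity packet over equal-length payloads."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Rebuild the single missing packet of a group by XOR-ing the
    parity with every packet that did arrive."""
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = [b"pkt0", b"pkt1", b"pkt2"]
parity = xor_parity(group)
# Simulate losing the middle packet of the group:
assert recover([group[0], group[2]], parity) == b"pkt1"
```

The cost is fixed bandwidth overhead (one parity packet per group) whether or not loss occurs, which is why FEC pays off mainly when RTT makes NACK-based recovery too slow.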

Performance Demo

Using OBS for streaming with a jitter buffer set to 500 ms, the end‑to‑end latency observed is around 900 ms.

Open‑Source Release

The Low‑Latency Player SDK is now open‑source, with future versions planned to support H265, RS‑FEC, and further improvements to first‑frame latency, end‑to‑end delay, and weak‑network robustness.

The upcoming open‑source low‑latency streaming SDK will provide a full‑chain solution for developers.

Source code:

https://github.com/GrowthEase/LLS-Player

https://gitee.com/GrowthEase/lls-player

Other open‑source projects:

NetEase Meeting: https://github.com/GrowthEase/NetEase_Meeting

https://gitee.com/GrowthEase/NetEase_Meeting

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: SDK, FFmpeg, low-latency streaming, real-time communication, WebRTC, media player
Written by

NetEase Smart Enterprise Tech+

Get cutting-edge insights from NetEase's CTO, access the most valuable tech knowledge, and learn NetEase's latest best practices. NetEase Smart Enterprise Tech+ helps you grow from a thinker into a tech expert.
