How WebRTC Jitter Buffer Manages Delay for Smooth Video Playback
This article explains the concept, components, and algorithms of WebRTC's adaptive jitter buffer, detailing how it calculates network, decode, and render delays to ensure smooth video playback while balancing latency and packet loss.
0. Introduction
A jitter buffer combines two parts: jitter (delay management) and buffering. It works on the receiver side, usually in the player, to ensure smooth playback. Jitter buffers come in static and adaptive variants; WebRTC uses an adaptive one, and this article focuses on how it calculates delay.
1. Basic Idea
To guarantee smooth playback, received frames should be played back at intervals matching the original capture interval. Because of network, decode, and render delays, playing a frame immediately can cause fast‑forward or stutter. WebRTC adds a dynamic delay before decoding a complete frame, allowing frames arriving within this delay to be played without stutter.
Assume packets A, B, C, and D are sent every 30 ms (at 30, 60, 90, and 120 ms) with network delays of 10 ms, 30 ms, 10 ms, and 10 ms, so they arrive at 40 ms, 90 ms, 100 ms, and 130 ms respectively. The resulting arrival intervals are 50 ms, 10 ms, and 30 ms.
To achieve smooth playback, a 20 ms buffer delay is added, resulting in playback times of 60 ms, 90 ms, 120 ms, and 150 ms, preserving the 30 ms interval.
If the buffer size is limited to 10 ms, playback times become 50 ms, 80 ms, 110 ms, and 140 ms, causing packet B to be dropped to maintain the 30 ms interval.
The jitter buffer size is critical: too small leads to packet loss, too large increases latency. Ideally, the sum of network delay and buffer delay equals the total delay. Most jitter buffers set their size to the measured maximum network delay.
2. Basic Process
The jitter buffer consists of a buffer part and a jitter part.
The buffer includes PacketBuffer, RtpFrameReferenceFinder, and FrameBuffer. PacketBuffer ensures frame completeness, RtpFrameReferenceFinder assigns reference frames, and FrameBuffer maintains continuity and decodability.
The jitter part handles delay calculation: FrameBuffer sets the expected receive time when inserting a frame, FindNextFrame sets the render time, and GetNextFrame updates the network jitter.
2.1 Common Classes
Key classes for video jitter buffer jitter calculation:
RtpVideoStreamReceiver: receives RTP data.
VideoReceiveStream: drives data, inserts complete frames into FrameBuffer, and triggers decoding.
FrameBuffer: ensures frame continuity and decodability, storing undecoded and decoded frames.
VCMJitterEstimator: computes jitter value.
VCMTiming: calculates current delay for rendering.
VCMCodecTimer: records decode latency.
2.2 Basic Flow Analysis
1. RtpVideoStreamReceiver receives RTP packets, unwraps them into VCMPacket, and inserts them into PacketBuffer.
2. PacketBuffer assembles frames; when a complete frame is found, it calls RtpVideoStreamReceiver::OnAssembledFrame.
3. RtpVideoStreamReceiver uses RtpFrameReferenceFinder to set reference frames, then calls OnCompleteFrame on VideoReceiveStream.
4. VideoReceiveStream inserts the frame into FrameBuffer, updating reference and decode completeness.
5. When decode time arrives, a frame is taken from FrameBuffer, decoded, and rendered after the render delay.
2.3 Post‑Insertion Processing in FrameBuffer
The main operations on FrameBuffer are reading and writing.
Reading:
VideoReceiveStream start launches a decode thread, which calls FrameBuffer::FindNextFrame to get a decodable frame and a wait time.
After waiting, GetNextFrame retrieves the frame for decoding.
GetNextFrame updates the delay based on actual decode time.
Writing:
When a complete frame with reference information is ready, VideoReceiveStream::OnCompleteFrame calls FrameBuffer::InsertFrame.
The insert operation checks frame validity, buffer fullness, updates reference counts, sets render time for non‑retransmitted frames, propagates continuity, and triggers decode tasks.
3. Delay Classification
Three main delay types exist: network jitter delay, decode delay, and fixed render delay (typically 10 ms).
3.1 RTP Frame Timeline
Key timestamps:
now_ms: current time (ms).
expect_decode_time: expected decode start time.
actual_decode_time: actual decode start time.
decode_finish_time: decode completion time.
render_time: render time.
wait_time: waiting time until the expected decode start.
current_delay: current delay.
target_delay: optimal target delay.
frame_delay: delay introduced by the frame.
render_delay: fixed 10 ms render delay.
Formulas:
expect_decode_time = render_time - decode_delay - render_delay
frame_delay = actual_decode_time - expect_decode_time
current_delay = min(current_delay + frame_delay, target_delay), applied only when frame_delay ≥ 0
wait_time = render_time - now_ms - decode_delay - render_delay
decode_delay = decode_finish_time - actual_decode_time
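The formulas above can be made concrete with a minimal sketch. `FrameTiming` and the helper functions are illustrative names, not WebRTC types:

```cpp
#include <cstdint>

// Illustrative timing state for one frame (not a WebRTC type).
struct FrameTiming {
  int64_t render_time_ms;
  int64_t decode_delay_ms;
  int64_t render_delay_ms;  // fixed 10 ms in WebRTC
};

// expect_decode_time = render_time - decode_delay - render_delay
int64_t ExpectDecodeTime(const FrameTiming& t) {
  return t.render_time_ms - t.decode_delay_ms - t.render_delay_ms;
}

// frame_delay = actual_decode_time - expect_decode_time
int64_t FrameDelay(const FrameTiming& t, int64_t actual_decode_time_ms) {
  return actual_decode_time_ms - ExpectDecodeTime(t);
}

// wait_time = render_time - now - decode_delay - render_delay
int64_t WaitTime(const FrameTiming& t, int64_t now_ms) {
  return t.render_time_ms - now_ms - t.decode_delay_ms - t.render_delay_ms;
}
```

For example, a frame due to render at 100 ms with a 15 ms decode delay and 10 ms render delay should start decoding at 75 ms; if decoding actually starts at 80 ms, the frame contributes 5 ms of delay.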
3.2 wait_time Calculation
wait_time represents the minimum waiting period from frame reception to decoding.
```cpp
int64_t VCMTiming::MaxWaitingTime(int64_t render_time_ms, int64_t now_ms) const {
  const int64_t max_wait_time_ms =
      render_time_ms - now_ms - RequiredDecodeTimeMs() - render_delay_ms_;
  return max_wait_time_ms;
}
```

3.3 Render Time Calculation
The desired render start time is computed as:
RenderTime = estimated_complete_time_ms (expected receive time) + actual_delay (current delay).
3.4 Jitter Calculation
3.4.1 Definition
Jitter is the variation in packet arrival intervals caused by network delay fluctuations. For adjacent packets it can be expressed as J(i) = R(i) - S(i), where S(i) is the sending interval and R(i) is the receiving interval.
Two‑step calculation:
Compute inter‑frame delay = (receive time difference) - (send time difference) → VCMInterFrameDelay.
Apply a Kalman filter on VCMInterFrameDelay to obtain the optimal jitter value → VCMJitterEstimator.
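Step 1 can be sketched as follows, assuming a 90 kHz RTP clock for video; the function name and signature are simplified stand-ins for what VCMInterFrameDelay computes:

```cpp
#include <cstdint>

// Illustrative sketch of the inter-frame delay d(i) for frame i:
// (receive-time difference) - (send-time difference).
// RTP timestamps tick at 90 kHz for video, i.e. 90 ticks per millisecond.
int64_t InterFrameDelayMs(int64_t recv_ms, int64_t prev_recv_ms,
                          uint32_t rtp_ts, uint32_t prev_rtp_ts) {
  // Convert the RTP timestamp difference (90 kHz ticks) to milliseconds.
  int64_t send_diff_ms = static_cast<int64_t>(rtp_ts - prev_rtp_ts) / 90;
  int64_t recv_diff_ms = recv_ms - prev_recv_ms;
  // d(i) = (receive interval) - (send interval)
  return recv_diff_ms - send_diff_ms;
}
```

Reusing the earlier example: two frames sent 30 ms apart (2700 RTP ticks) that arrive 50 ms apart yield an inter-frame delay of 20 ms.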
3.4.2 Jitter Model
Let T(i) be the send time and t(i) the receive time of frame i, and let d(i) be the receive-time difference minus the send-time difference between adjacent frames; w(i) is noise. With L(i) the frame size and C(i) the channel transmission rate (assumed constant, i.e. C(i) = C(i-1)):

```
d(i) = t(i) - t(i-1) - (T(i) - T(i-1)) + w(i)
     = (t(i) - T(i)) - (t(i-1) - T(i-1)) + w(i)
     = L(i)/C(i) - L(i-1)/C(i-1) + w(i)
     = dL(i)/C(i) + w(i)
```

Thus jitter d(i) = dL(i)/C(i) + w(i), where dL(i) = L(i) - L(i-1).
3.5 Expected Receive Time (estimated_complete_time_ms)
TimestampExtrapolator, a Kalman filter, computes the expected receive time from RTP timestamps and actual receive times.
Formula: T(k) = startMs + (timestampDiff - w[1]) / w[0] + 0.5, where timestampDiff = R(k) - firstTimestamp.
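A sketch of this extrapolation formula, where w0 (RTP ticks per millisecond, roughly 90 for video) and w1 (an offset in ticks) stand in for the Kalman-filtered state held by TimestampExtrapolator; the function name is illustrative:

```cpp
#include <cstdint>

// Illustrative sketch: extrapolate the expected local receive time (ms) of a
// frame from its RTP timestamp. w0 ~ ticks per ms, w1 ~ offset in ticks;
// both are Kalman-filtered state in the real TimestampExtrapolator.
int64_t ExtrapolateLocalTimeMs(int64_t start_ms, uint32_t rtp_timestamp,
                               uint32_t first_timestamp, double w0, double w1) {
  double timestamp_diff =
      static_cast<double>(rtp_timestamp - first_timestamp);
  // T(k) = startMs + (timestampDiff - w[1]) / w[0] + 0.5 (rounding)
  return start_ms + static_cast<int64_t>((timestamp_diff - w1) / w0 + 0.5);
}
```

For instance, with w0 = 90 and w1 = 0, a frame 9000 ticks after the first one is expected 100 ms after start.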
3.6 Actual Delay (actual_delay)
Actual delay is derived from current_delay_ms_, which is constrained between min_playout_delay_ms_ and max_playout_delay_ms_.
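A minimal sketch of this clamping, with illustrative names:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative sketch: the delay actually applied is current_delay clamped
// into the allowed playout range [min_playout_delay, max_playout_delay].
int64_t ActualDelayMs(int64_t current_delay_ms,
                      int64_t min_playout_delay_ms,
                      int64_t max_playout_delay_ms) {
  return std::max(min_playout_delay_ms,
                  std::min(current_delay_ms, max_playout_delay_ms));
}
```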
3.7 Current Delay (current_delay_ms_)
```cpp
void VCMTiming::UpdateCurrentDelay(int64_t render_time_ms,
                                   int64_t actual_decode_time_ms) {
  rtc::CritScope cs(&crit_sect_);
  uint32_t target_delay_ms = TargetDelayInternal();
  int64_t delayed_ms =
      actual_decode_time_ms -
      (render_time_ms - RequiredDecodeTimeMs() - render_delay_ms_);
  if (delayed_ms < 0) {
    return;
  }
  if (current_delay_ms_ + delayed_ms <= target_delay_ms) {
    current_delay_ms_ += delayed_ms;
  } else {
    current_delay_ms_ = target_delay_ms;
  }
}
```

current_delay_ms_ adds the positive delay difference (delayed_ms) to the previous current delay, clamped from above by target_delay_ms.
3.8 Target Delay (target_delay)
```
target_delay = max(min_playout_delay_ms_,
                   jitter_delay_ms_ + RequiredDecodeTimeMs() + render_delay_ms_)
```

target_delay combines network jitter delay, decode delay, and the fixed render delay, floored by the minimum playout delay.
3.9 Decode Delay (decode_delay)
```cpp
int VCMTiming::RequiredDecodeTimeMs() const {
  const int decode_time_ms = codec_timer_->RequiredDecodeTimeMs();
  assert(decode_time_ms >= 0);
  return decode_time_ms;
}
```

It uses the 95th percentile of the last 10,000 decode time samples.
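A simplified sketch of the idea behind VCMCodecTimer: keep a sliding window of recent decode-time samples and report the 95th percentile. The real implementation maintains an efficient percentile filter over the last 10,000 samples; this illustrative version recomputes directly for clarity:

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>
#include <vector>

// Illustrative sketch (not the WebRTC implementation): sliding window of
// decode times with a 95th-percentile query, recomputed on each call.
class DecodeTimePercentile {
 public:
  explicit DecodeTimePercentile(size_t max_samples = 10000)
      : max_samples_(max_samples) {}

  void AddSample(int64_t decode_time_ms) {
    samples_.push_back(decode_time_ms);
    if (samples_.size() > max_samples_) samples_.pop_front();  // drop oldest
  }

  int64_t RequiredDecodeTimeMs() const {
    if (samples_.empty()) return 0;
    std::vector<int64_t> sorted(samples_.begin(), samples_.end());
    std::sort(sorted.begin(), sorted.end());
    size_t idx = static_cast<size_t>(0.95 * (sorted.size() - 1));
    return sorted[idx];  // 95th-percentile sample
  }

 private:
  size_t max_samples_;
  std::deque<int64_t> samples_;
};
```

Using a high percentile rather than the mean keeps occasional slow decodes from being underestimated without letting a single outlier dominate.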
4. Observing Metric Changes
The various delay values can be inspected in chrome://webrtc-internals:
googDecodeMs – latest decode time.
googMaxDecodeMs – maximum decode time (95th percentile).
googRenderDelayMs – render delay (10 ms).
googJitterBufferMs – network jitter delay.
googMinPlayoutDelayMs – minimum playout delay for AV sync.
googTargetDelayMs – target delay.
googCurrentDelayMs – current delay used for RenderTime.
Douyu Streaming
Official account of Douyu Streaming Development Department, sharing audio and video technology best practices.