Mastering Network Congestion Control: Bandwidth Estimation, Bitrate Allocation, and Paced Sending
This article explains how network congestion affects audio‑video applications and introduces NetEase Cloud Communication’s QoS strategies—including real‑time bandwidth estimation algorithms, adaptive bitrate allocation, and paced sending techniques—to achieve low latency, high stability, and optimal user experience across Wi‑Fi, cellular, and wired networks.
Why Network QoS Matters
With the rapid growth of mobile networks, video conferencing and interactive live streaming have exploded, creating high demand for audio‑video experiences that are high‑quality, low‑latency, and ultra‑smooth. Network QoS (Quality of Service) provides the fundamental guarantee for data‑transfer channels.
Common Audio‑Video Network Issues
Typical problems include congestion, delay jitter, and packet loss. Improper handling of congestion can cause increasing latency, severe packet loss, and ultimately long playback delays and stuttering.
What Is Network Congestion and How to Control It
Network congestion occurs when usage of resources (bandwidth, buffer space, CPU) exceeds capacity, degrading performance. Congestion control aims to limit the sender’s rate to prevent congestion and to eliminate congestion once it appears, thereby improving throughput.
Just as traffic jams slow vehicles on a road, congestion‑control strategies act like traffic‑management measures.
Types of Networks and Congestion Behavior
Networks can be roughly divided into two categories based on buffer depth:
Shallow‑buffer networks: Nodes have little or no buffering, so congestion manifests mainly as packet loss, with little increase in delay.
Deep‑buffer networks: Nodes have large buffers, so congestion first shows up as rising delay; packet loss occurs only once the buffers are exhausted.
Wi‑Fi, cellular, and wired networks each have specific causes of limited bandwidth, such as signal attenuation, interference, device density, or shared bandwidth.
Congestion‑Control Strategies Overview
The core strategies include real‑time bandwidth estimation, bitrate allocation, and paced sending.
Fused Bandwidth‑Estimation Algorithm
The algorithm combines a delay‑based method and a loss‑based method, using ACK bitrate as a reference to compute a bandwidth estimate.
Sender side: The sender paces data out smoothly while the receiver periodically feeds back packet arrival status and timestamps.
Receiver feedback processing: From this feedback, the sender takes packet arrival times and sizes, calculates the received (ACK) bitrate over a short window (a few hundred milliseconds), and applies a Bayesian estimator to obtain a stable estimate.
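A minimal sketch of this step: a sliding window over acknowledged packets yields a raw bitrate sample, which a Kalman-style Bayesian update then smooths. The window length and noise variances here are illustrative assumptions, not the article's actual parameters.

```python
from collections import deque

class AckBitrateEstimator:
    """Windowed ACK-bitrate estimator with a simple Bayesian update.
    Window length and noise values are illustrative assumptions."""

    def __init__(self, window_ms=500.0):
        self.window_ms = window_ms
        self.packets = deque()      # (arrival_ms, size_bytes)
        self.estimate_kbps = None   # posterior mean
        self.var = 50.0             # posterior variance (kbps^2)

    def on_packet(self, arrival_ms, size_bytes):
        self.packets.append((arrival_ms, size_bytes))
        # Evict packets that fell out of the observation window.
        while self.packets and arrival_ms - self.packets[0][0] > self.window_ms:
            self.packets.popleft()
        span_ms = max(arrival_ms - self.packets[0][0], 1.0)
        sample_kbps = sum(s for _, s in self.packets) * 8.0 / span_ms
        if self.estimate_kbps is None:
            self.estimate_kbps = sample_kbps
            return self.estimate_kbps
        # Bayesian fusion: weight the new sample against the prior
        # by their variances, so one noisy report cannot swing the estimate.
        sample_var = 25.0           # assumed measurement noise
        k = self.var / (self.var + sample_var)
        self.estimate_kbps += k * (sample_kbps - self.estimate_kbps)
        self.var = (1.0 - k) * self.var + 5.0   # small process noise
        return self.estimate_kbps
```

Feeding it a steady stream (for example, 500-byte packets every 10 ms, i.e. 400 kbps) converges to the true rate after the initial transient.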
Delay‑based component: Groups packets, measures inter‑group delay variation, and feeds it to a trendline algorithm, classifying network state as overuse, normal, or underuse.
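The trendline step can be sketched as a least-squares fit over accumulated delay variation: a persistent positive slope means queues are building. The gain and threshold values below are illustrative assumptions.

```python
def trendline_state(samples, threshold=1.0, gain=4.0):
    """Classify network load from (arrival_ms, inter_group_delay_ms)
    samples. A sustained positive slope of accumulated delay variation
    indicates overuse; gain/threshold values are assumptions."""
    acc, xs, ys = 0.0, [], []
    for t, d in samples:
        acc += d            # accumulate delay variation over time
        xs.append(t)
        ys.append(acc)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs) or 1.0
    slope = num / den       # queue growth in ms per ms of wall time
    trend = slope * gain * n
    if trend > threshold:
        return "overuse"
    if trend < -threshold:
        return "underuse"
    return "normal"
```

With groups whose delay variation keeps growing the slope is positive and the detector reports overuse; flat delay variation maps to normal.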
Loss‑based component: Computes packet‑loss rate from feedback, applies a filter to determine loss trend states (LossIncr, LossHold, LossDecr).
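One way to realize such a filter is exponential smoothing of the per-feedback loss rate, classifying the direction of movement; the smoothing factor and hold band here are illustrative assumptions.

```python
class LossTrendFilter:
    """Smooths per-feedback loss rates and classifies the trend as
    LossIncr / LossHold / LossDecr. Alpha and the hold band are
    illustrative assumptions."""

    def __init__(self, alpha=0.3, band=0.02):
        self.alpha = alpha
        self.band = band
        self.smoothed = 0.0

    def update(self, loss_rate):
        prev = self.smoothed
        # Exponential smoothing filters out single noisy reports.
        self.smoothed = (1 - self.alpha) * prev + self.alpha * loss_rate
        if self.smoothed - prev > self.band:
            return "LossIncr"
        if prev - self.smoothed > self.band:
            return "LossDecr"
        return "LossHold"
```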
Rate control: Based on network load, loss trend, and ACK bitrate, three rate‑control states (RC Decr, RC Hold, RC Incr) produce an RC estimate, which is combined with loss information to yield the final bandwidth estimate.
If loss rate is below a low threshold, the final estimate is θ × RC (θ > 1.0, adjusted by RTT). If loss is high and in LossIncr for a sustained period, the ACK bitrate becomes the final estimate; otherwise, the RC estimate is used.
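The combination rule above can be sketched as a single decision function. The thresholds, the sustained-loss duration, and the theta-versus-RTT schedule are illustrative assumptions, not the article's tuned values.

```python
def final_bandwidth_estimate(rc_kbps, ack_kbps, loss_rate, loss_state,
                             loss_incr_ms, rtt_ms,
                             low_loss=0.02, high_loss=0.10,
                             sustained_ms=1000.0):
    """Combine the rate-control (RC) estimate, ACK bitrate, and loss
    information into the final estimate. Numeric thresholds and the
    theta schedule are assumptions."""
    if loss_rate < low_loss:
        # Low loss: probe above the RC estimate (theta > 1.0),
        # less aggressively as RTT grows.
        theta = 1.05 if rtt_ms < 100 else 1.02
        return theta * rc_kbps
    if (loss_rate > high_loss and loss_state == "LossIncr"
            and loss_incr_ms >= sustained_ms):
        # Sustained rising loss: fall back to what the path delivered.
        return ack_kbps
    return rc_kbps
```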
For deep‑buffer networks, the delay‑based algorithm can quickly detect congestion and provide accurate estimates. For shallow‑buffer networks, delay changes are minimal, so loss‑based information must be incorporated.
Bitrate Allocation
The upper limit of the bandwidth estimate is set to the video’s maximum recommended bitrate (derived from resolution, frame rate, etc.). When no loss occurs, the entire estimated bandwidth is allocated to encoding. When loss is present, Forward Error Correction (FEC) and retransmission (NACK) are added, so the sum of FEC + retransmission + encoding bitrate must not exceed the bandwidth estimate, otherwise congestion worsens.
A dynamic upper‑limit strategy monitors the ratio of total sent bitrate to encoding bitrate, smooths it, and multiplies the result by the recommended maximum to obtain a new upper bound, updating with a fast‑rise, slow‑fall policy.
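The allocation and the dynamic upper bound can be sketched together as below. The FEC/NACK protection ratios, the smoothing factors, and the fast-rise/slow-fall constants are illustrative assumptions.

```python
class BitrateAllocator:
    """Splits the bandwidth estimate among encoder, FEC, and
    retransmission, and maintains a fast-rise / slow-fall dynamic
    upper bound. All ratios and factors are assumptions."""

    def __init__(self, max_recommended_kbps):
        self.max_recommended = max_recommended_kbps
        self.upper_bound = max_recommended_kbps
        self.smoothed_ratio = 1.0   # total sent / encoding bitrate

    def allocate(self, estimate_kbps, loss_rate):
        estimate = min(estimate_kbps, self.upper_bound)
        if loss_rate <= 0.0:
            # No loss: the whole budget goes to the encoder.
            return {"encode": estimate, "fec": 0.0, "nack": 0.0}
        # With loss, reserve protection budget so that
        # encode + FEC + retransmission never exceeds the estimate.
        fec = estimate * min(loss_rate * 1.5, 0.3)   # assumed FEC ratio
        nack = estimate * min(loss_rate, 0.2)        # assumed NACK budget
        return {"encode": estimate - fec - nack, "fec": fec, "nack": nack}

    def update_upper_bound(self, sent_kbps, encode_kbps):
        ratio = sent_kbps / max(encode_kbps, 1.0)
        # Fast rise, slow fall: react quickly when overhead grows,
        # decay slowly when it shrinks.
        alpha = 0.5 if ratio > self.smoothed_ratio else 0.05
        self.smoothed_ratio += alpha * (ratio - self.smoothed_ratio)
        self.upper_bound = self.max_recommended * self.smoothed_ratio
```

Note that under loss the three components always sum to the (bounded) estimate, which is the invariant the text demands.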
Paced Sending
Paced sending uses a token‑bucket rate‑limiter to control the sending speed of all RTP packets (media, FEC, retransmission). Packets are placed in a priority queue; a timer updates a budget based on the bandwidth estimate and a pacer coefficient. When the budget is positive, packets are sent and the budget is consumed; when it reaches zero, sending pauses.
The pacer coefficient determines smoothing strength: a coefficient of 1.0 means strict adherence to the bandwidth estimate, minimizing burst impact and improving utilization, but may add a small pacer delay. Variations in I‑frame size, scene changes, and extra FEC/retransmission traffic cause queue rate fluctuations, so the coefficient must balance pacer delay, congestion delay, bandwidth utilization, and overall QoE.
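The queue-plus-budget mechanism above can be sketched as follows; the timer interval and the priority scheme are illustrative assumptions. As in real pacers, the budget is allowed to go slightly negative so an oversized packet is not starved, and the deficit is repaid on later ticks.

```python
import heapq

class Pacer:
    """Token-bucket pacer over a priority queue. Refresh interval
    and priority scheme are illustrative assumptions."""

    def __init__(self, estimate_kbps, coefficient=1.0, interval_ms=5.0):
        self.rate_kbps = estimate_kbps * coefficient
        self.interval_ms = interval_ms
        self.budget_bytes = 0.0
        self.queue = []              # (priority, seq, size_bytes)
        self.seq = 0

    def enqueue(self, priority, size_bytes):
        # Lower priority value is sent first (e.g. retransmission
        # before media before FEC); seq keeps FIFO order within a class.
        heapq.heappush(self.queue, (priority, self.seq, size_bytes))
        self.seq += 1

    def on_timer(self):
        """Refill the budget for one interval, then drain packets while
        budget remains; returns the packet sizes sent this tick."""
        # kbps == bits per ms, so rate * interval / 8 is bytes per tick.
        self.budget_bytes += self.rate_kbps * self.interval_ms / 8.0
        sent = []
        while self.queue and self.budget_bytes > 0:
            _, _, size = heapq.heappop(self.queue)
            self.budget_bytes -= size
            sent.append(size)
        return sent
```

At 800 kbps and a 5 ms tick the budget refills by 500 bytes per tick, so four queued 400-byte packets drain over three ticks instead of one burst.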
Dynamic Pacer Coefficient Strategy
When bandwidth is limited, the smoothing coefficient is set smaller and adjusted based on queue delay; when bandwidth is abundant, the coefficient is larger, with the current bandwidth estimate and recent congestion observations used to decide if bandwidth is constrained.
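A toy version of this decision: all numeric values (the constrained-bandwidth threshold, the coefficients, the queue-delay cutoff) are illustrative assumptions.

```python
def pacer_coefficient(estimate_kbps, recently_congested, queue_delay_ms,
                      constrained_kbps=1500.0):
    """Pick a pacing coefficient: smooth harder (smaller coefficient)
    when bandwidth is limited, relax it when bandwidth is abundant.
    All numeric values are assumptions."""
    limited = recently_congested or estimate_kbps < constrained_kbps
    if not limited:
        # Abundant bandwidth: a larger coefficient cuts pacer delay
        # at the cost of burstier output.
        return 2.5
    # Limited bandwidth: pace tightly, but back off if the pacer
    # queue itself starts adding too much delay.
    return 1.1 if queue_delay_ms < 100.0 else 1.5
```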
Conclusion
The article presented NetEase Cloud Communication’s QoS congestion‑control strategy, covering bandwidth‑estimation algorithms, bitrate allocation, and paced‑sending techniques. By balancing QoE metrics such as latency, stutter, and bandwidth utilization, these methods enable low‑delay, low‑stutter, high‑utilization audio‑video experiences.