DeepQueueNet: Scalable Network Performance Estimation with Packet‑Level Visibility
DeepQueueNet combines discrete‑event and continuous simulation with deep neural networks to deliver highly accurate, generalizable, and GPU‑scalable network performance estimates at packet‑level granularity, outperforming existing DNN‑based estimators across diverse topologies and traffic scenarios.
Introduction
Network simulators are essential tools for operators, supporting tasks such as capacity planning, topology design, and parameter tuning. This article presents DeepQueueNet, a network performance estimation tool that offers scalable and generalized evaluation with packet‑level visibility, built on well‑established queuing theory and deep neural networks (DNNs).
Research Background
Traditional discrete‑event simulators struggle to scale to modern network sizes. Recent deep‑learning‑based approaches improve scalability but suffer from poor visibility of simulation results and limited applicability across scenarios. DeepQueueNet addresses these gaps by integrating continuous simulation with discrete‑event simulation, achieving high scalability while preserving packet‑level insight.
Design of DeepQueueNet
The design starts with a queuing‑theory model of modern networks, isolates mathematically hard or computationally expensive components, and replaces them with DNNs. The DNN is applied only to the device‑local traffic‑management (TM) mechanism within an end‑to‑end performance estimator (EPE). The device model comprises two sub‑models: a packet‑level forwarding model (PFM) that describes forwarding behavior via tensor multiplication of routing tables, and a packet‑level TM model (PTM) that predicts per‑packet delay.
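The forwarding sub-model (PFM) described above can be illustrated with a small sketch. This is not DeepQueueNet's actual implementation; it assumes one plausible encoding in which a device's routing table becomes a one-hot matrix, so that forwarding a batch of packets reduces to a single matrix multiplication that maps naturally onto GPU hardware.

```python
import numpy as np

# Hypothetical sketch of a packet-level forwarding model (PFM):
# the routing table of a 4-port device is encoded as a one-hot
# matrix R, where R[dst, port] = 1 means packets destined for
# `dst` leave on `port`. Forwarding a whole batch of packets is
# then one tensor multiplication.

NUM_DESTS, NUM_PORTS = 8, 4

# Illustrative routing table: destination d exits on port d % NUM_PORTS.
routing = np.zeros((NUM_DESTS, NUM_PORTS))
routing[np.arange(NUM_DESTS), np.arange(NUM_DESTS) % NUM_PORTS] = 1.0

# A batch of packets, each represented by a one-hot destination vector.
dests = np.array([0, 5, 6, 3])
packets = np.eye(NUM_DESTS)[dests]    # shape (4, NUM_DESTS)

# Matrix multiplication yields each packet's output port (one-hot).
ports_onehot = packets @ routing      # shape (4, NUM_PORTS)
ports = ports_onehot.argmax(axis=1)
print(ports)                          # [0 1 2 3]
```

Because the forwarding step is pure linear algebra, it batches across packets and devices, which is what leaves only the traffic-management behavior to the learned PTM.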
Architecture
DeepQueueNet consists of five core components:
DUtil: generates trained device models.
DLib: stores and indexes device models for switches, routers, and links.
TGUtil: creates traffic generators (TGen) from user specifications.
SInit: parses user input and configures the simulation.
SRun: executes the simulation.
The workflow mirrors existing DES pipelines: prepare simulation settings (topology, device configuration, traffic generator), run the simulation, collect packet traces, and apply arbitrary metrics to the results.
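The workflow above can be sketched as a minimal pipeline. All names here (`SimConfig`, `run_simulation`, `mean_rtt`) are illustrative placeholders, not DeepQueueNet's actual API; the point is the shape of the DES-like loop: configure, simulate, collect traces, then apply any metric.

```python
from dataclasses import dataclass

@dataclass
class SimConfig:
    topology: dict   # nodes and links
    devices: dict    # per-node device model chosen from the library
    traffic: dict    # traffic-generator specification

def run_simulation(cfg: SimConfig) -> list[dict]:
    """Return packet traces: one record per packet with its per-hop delays."""
    traces = []
    for pkt_id in range(3):  # stand-in for real packet events
        traces.append({"pkt": pkt_id, "delays_ms": [0.1, 0.3, 0.2]})
    return traces

def mean_rtt(traces: list[dict]) -> float:
    """An arbitrary user metric applied to the collected traces."""
    return sum(sum(t["delays_ms"]) for t in traces) / len(traces)

cfg = SimConfig(topology={}, devices={}, traffic={})
traces = run_simulation(cfg)
print(round(mean_rtt(traces), 3))  # 0.6
```

Because metrics are computed from raw packet traces rather than aggregate predictions, any statistic (tail latency, jitter, loss) can be derived after the fact, which is the packet-level visibility the article emphasizes.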
Evaluation
Accuracy: Compared with state‑of‑the‑art DNN‑based EPEs, DeepQueueNet achieves superior accuracy in estimating both average and 99th‑percentile round‑trip time (RTT) across all test scenarios.
Generalization: Extensive experiments show that DeepQueueNet maintains high estimation accuracy, measured by normalized Wasserstein distance, when the topology, TM configuration, or traffic‑generation model changes, without requiring model retraining.
Scalability: Deployed on a 4‑GPU cluster, DeepQueueNet demonstrates near‑linear speed‑up as the number of GPUs increases.
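The generalization metric mentioned above can be made concrete with a small sketch. This shows one plausible way to compute a normalized first-order Wasserstein distance between a ground-truth delay distribution and an estimator's output; the exact normalization DeepQueueNet uses may differ, and the distributions below are synthetic stand-ins.

```python
import numpy as np

# For two equal-size empirical samples, the first-order Wasserstein
# distance is the mean absolute difference of the sorted samples.
rng = np.random.default_rng(0)
true_delays = rng.exponential(scale=1.00, size=10_000)  # "ground truth"
pred_delays = rng.exponential(scale=1.05, size=10_000)  # slightly biased model

w1 = np.abs(np.sort(true_delays) - np.sort(pred_delays)).mean()
normalized_w1 = w1 / true_delays.mean()  # normalize by the true mean delay
print(f"{normalized_w1:.3f}")
```

Comparing whole delay distributions rather than point estimates is what makes the metric sensitive to tail behavior, which matters for the 99th-percentile RTT results reported above.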
Network Intelligence Research Center (NIRC)
NIRC is based at the National Key Laboratory of Network and Switching Technology at Beijing University of Posts and Telecommunications. It has built a technology matrix across four AI domains—intelligent cloud networking, natural language processing, computer vision, and machine learning systems—and is dedicated to solving real‑world problems, building top‑tier systems, publishing high‑impact papers, and advancing China's network technology.