DeepQueueNet in Practice: Quickly Achieve High‑Precision Network Simulation

This article walks through using DeepQueueNet—a deep‑learning‑enhanced network performance estimator—to set up a device model, train the PyTorch version, configure a fattree16 topology, and run multi‑GPU simulations that finish in minutes—as little as 1 min 27 s—while retaining packet‑level accuracy.

Network Intelligence Research Center (NIRC)

DeepQueueNet (DQN) is an innovative network performance evaluation tool that combines deep learning with discrete‑event simulation to provide scalable, packet‑level accurate estimates; the underlying paper was accepted at ACM SIGCOMM 2022.

DeepQueueNet architecture diagram

Preparation & Model Training

Hardware: at least one GPU (tested on Tesla P100‑12 GB) and ~200 GB disk space (pre‑processed data ≈180 GB).

Dataset: download from Dropbox, set up the Conda environment with conda env create -f env.yml, unzip data, run Dataset.py for preprocessing.

Training: use the PyTorch version, execute python train1000.py (set max_epoch=20); data processing takes ~1 h, model training ~15 h.

The trained device model’s probability density and cumulative distribution functions on the test set closely match the true distribution, demonstrating strong fitting ability.
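The kind of comparison behind that claim can be sketched with an empirical CDF: sort each sample of delays and compare the two step functions pointwise. The helper below is our own illustration, not code from the repository, and the delay values in the usage example are synthetic placeholders:

```python
def empirical_cdf(samples):
    """Return (sorted values, cumulative probabilities) for a 1-D sample.
    Comparing the CDFs of true vs. predicted delays at matching quantiles
    is one simple way to visualize how well the device model fits."""
    xs = sorted(samples)
    n = len(xs)
    return xs, [(i + 1) / n for i in range(n)]
```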

DeepQueueNet model architecture

Network Simulation

We configure a fattree16 topology by defining a two‑dimensional array T in fattree_model.py, where T(i,j) specifies the port connecting switch i to switch j (‑1 indicates no link). Example: T(0,4)=0 means switch 0 connects to switch 4 via port 0.
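A minimal sketch of how such a connectivity matrix can be encoded in Python (using T[i][j] list indexing for the article's T(i,j)). Only the entry T[0][4] = 0 comes from the article; the switch count and the helper functions are our own illustration, not the actual contents of fattree_model.py:

```python
NO_LINK = -1
N = 20  # assumed switch count for fattree16 (4 core + 8 aggregation + 8 edge)

# T[i][j] is the port on switch i that connects to switch j; -1 means no link.
T = [[NO_LINK] * N for _ in range(N)]


def link(i, j, port_i, port_j):
    """Record a bidirectional link: i reaches j via port_i, j reaches i via port_j."""
    T[i][j] = port_i
    T[j][i] = port_j


link(0, 4, 0, 0)  # switch 0 connects to switch 4 via port 0 (article's example)


def port_to(i, j):
    """Port on switch i leading to switch j, or NO_LINK."""
    return T[i][j]
```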

Fattree16 topology diagram

Multi‑GPU simulation is supported; device ranges are assigned per GPU (e.g., four GPUs each handling a partition of the topology as shown in the diagram).
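Assigning contiguous device ranges to GPUs, as in the 4‑GPU run above, can be sketched as a simple partitioning step. The function name and the even-split strategy are our own assumptions about how such an assignment might look, not DeepQueueNet's actual scheduler:

```python
def partition_devices(num_devices, num_gpus):
    """Split device indices 0..num_devices-1 into num_gpus near-equal
    contiguous ranges, one range per GPU."""
    base, extra = divmod(num_devices, num_gpus)
    ranges, start = [], 0
    for g in range(num_gpus):
        size = base + (1 if g < extra else 0)  # spread the remainder over the first GPUs
        ranges.append(range(start, start + size))
        start += size
    return ranges
```

For example, partition_devices(20, 4) splits 20 switches into four ranges of five devices each, one per GPU.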

Device allocation for 4‑GPU simulation

Simulation packets are represented as a tuple of fields: sequence number, send timestamp, byte size, priority, source, destination port, destination, path, and etime; the etime field is used only for accuracy verification, not by the simulator itself.
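One way to hold the fields listed above is a namedtuple. The field names below are our own shorthand, and the exact field order and encoding in the actual dataset may differ:

```python
from collections import namedtuple

# Sketch of a packet record; etime is kept only for accuracy verification
# and is never consumed by the simulator itself.
Packet = namedtuple(
    "Packet",
    ["seq", "stime", "size", "priority", "src", "dst_port", "dst", "path", "etime"],
)

# Illustrative values only: a 1500-byte packet traversing switches 0 -> 4 -> 8 -> 15.
pkt = Packet(seq=0, stime=0.0, size=1500, priority=0,
             src=0, dst_port=1, dst=15, path=(0, 4, 8, 15), etime=None)
```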

Using the provided validation dataset (30 s of traffic generated by three random processes), we run the simulation with the trained model. The predicted delays are virtually indistinguishable from the ground‑truth delays.

Final simulation results

Performance on a Tesla P100‑12 GB GPU:

1 GPU: 5 min 12 s

2 GPU: 2 min 45 s

4 GPU: 1 min 27 s
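The timings above correspond to near-linear scaling; converting them to seconds and dividing gives the speedup directly:

```python
# Runtimes from the Tesla P100 measurements above, in seconds.
runtimes = {1: 5 * 60 + 12, 2: 2 * 60 + 45, 4: 60 + 27}  # 312, 165, 87

# Speedup relative to the single-GPU run.
speedup = {g: runtimes[1] / t for g, t in runtimes.items()}
# 2 GPUs -> ~1.89x, 4 GPUs -> ~3.59x
```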

References: https://www.dropbox.com/s/q56sx4hxe93n4g5/DeepQueueNet-dataset.zip?dl=0 (training and simulation dataset) and https://github.com/HUAWEI-Theory-Lab/deepqueuenet/tree/pytorch (official repository).

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: deep learning, PyTorch, network simulation, multi‑GPU, DeepQueueNet, fattree topology
Written by

Network Intelligence Research Center (NIRC)

NIRC is based on the National Key Laboratory of Network and Switching Technology at Beijing University of Posts and Telecommunications. It has built a technology matrix across four AI domains—intelligent cloud networking, natural language processing, computer vision, and machine learning systems—dedicated to solving real‑world problems, creating top‑tier systems, publishing high‑impact papers, and contributing significantly to the rapid advancement of China's network technology.
