INCS: A DRL‑Based Intent‑Driven Network‑Wide Configuration Synthesis Framework
This article presents INCS, a framework that combines graph neural networks with deep reinforcement learning to synthesize network configurations that are protocol-agnostic, millisecond-scale in response time, and globally optimized. INCS addresses the scalability limits, protocol dependence, and lack of optimization support in traditional SMT-based methods, and demonstrates superior performance on large-scale topologies.
Research Background
As network size grows exponentially, traditional template‑based or SMT‑solver configuration synthesizers suffer from poor scalability, long synthesis times, and inability to handle soft‑constraint optimization.
Limitations of Existing Methods
Scalability bottleneck: SMT-based tools such as NetComplete exhibit exponential growth in synthesis time and often time out on topologies with more than a few dozen routers.
Poor generality: most synthesizers are tied to a specific protocol, making cross-protocol reasoning difficult.
Lack of optimization: they handle only hard specifications and abort when constraints are unsatisfiable, offering no best-effort (sub-optimal) solutions.
INCS Solution Overview
INCS (Intent‑driven Network‑wide Configuration Synthesis) introduces a learning‑based framework that uses a graph neural network (GAT) to process non‑Euclidean topology data and a deep deterministic policy gradient (DDPG) reinforcement learner to iteratively refine the configuration toward global optimality, while remaining protocol‑agnostic and responding within milliseconds.
Overall Architecture
The end‑to‑end synthesizer takes three inputs: the network topology, administrator intent (hard and soft constraints), and a configuration sketch with undetermined parameters. Its core consists of three modules:
Fact-graph constructor: converts heterogeneous inputs into a unified embedding graph using a Datalog-style fact representation.
GNN-based predictor: applies a Graph Attention Network (GAT) to the fact graph, producing an initial probability distribution for each unknown parameter.
DRL-based optimizer: employs DDPG to fine-tune the predictor's output, satisfying both hard and soft constraints.
Fact‑Graph Constructor Details
Network entities (routers, neighbors, link weights) and hard constraints are encoded as facts; a graph embedding function F_emb() maps these facts to nodes, with adjacency edges representing parameter relationships.
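To make the idea concrete, here is a minimal sketch of a Datalog-style fact-graph constructor. The fact schema, the `build_fact_graph` helper, and the edge rule (connect facts that mention a common entity) are illustrative assumptions, not the paper's exact `F_emb()` definition:

```python
def build_fact_graph(routers, links, constraints):
    """Encode routers, links, and hard constraints as fact nodes;
    connect any two facts that mention a common network entity."""
    facts = []
    facts += [("router", r) for r in routers]
    facts += [("link", u, v, w) for (u, v, w) in links]   # w: weight, possibly unknown
    facts += [("constraint",) + tuple(c) for c in constraints]

    def entities(fact):
        # Treat every string argument of a fact as a named entity.
        return {x for x in fact[1:] if isinstance(x, str)}

    edges = [(i, j)
             for i in range(len(facts))
             for j in range(i + 1, len(facts))
             if entities(facts[i]) & entities(facts[j])]
    return facts, edges

# Tiny example: three routers, one link with an undetermined weight (None),
# and one reachability intent.
facts, edges = build_fact_graph(
    routers=["r1", "r2", "r3"],
    links=[("r1", "r2", 10), ("r2", "r3", None)],
    constraints=[("reachable", "r1", "r3")],
)
```

Undetermined parameters (the `None` weight here) are exactly the values the downstream predictor and optimizer fill in.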
GNN‑Based Predictor Details
The predictor receives the embedded fact graph, stacks L graph‑attention layers with multi‑head attention, and computes attention scores that highlight topology features most relevant to configuration decisions. After a Softmax layer, it outputs a probability distribution O_j for each unknown parameter. Training uses supervised learning on a dataset generated by NetComplete, optimizing negative log‑likelihood loss.
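A stripped-down, pure-Python view of one attention pass may help. Real GAT layers use learnable weight matrices, LeakyReLU-activated scoring vectors, and multiple heads; this toy uses scalar node features and a single scalar score parameter `w_score` (an illustrative name) purely to show the softmax-weighted neighbor aggregation:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_layer(h, nbrs, w_score):
    """One simplified graph-attention pass: each node aggregates its
    neighbors' features, weighted by softmax-normalized attention scores."""
    out = []
    for i, hi in enumerate(h):
        scores = [w_score * (hi + h[j]) for j in nbrs[i]]  # e_ij ~ a(W h_i, W h_j)
        alphas = softmax(scores)                           # attention coefficients
        out.append(sum(a * h[j] for a, j in zip(alphas, nbrs[i])))
    return out

# Three nodes in a triangle, identical features: attention splits evenly.
h = [1.0, 1.0, 1.0]
nbrs = [[1, 2], [0, 2], [0, 1]]
out = attention_layer(h, nbrs, w_score=0.5)
```

Stacking L such layers (with multi-head attention and nonlinearities) and finishing with a softmax over candidate parameter values yields the per-parameter distribution O_j described above.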
DRL‑Based Optimizer Details
The optimizer treats the predictor’s probability distribution P_t as the state space. Actions combine a deterministic policy with Ornstein‑Uhlenbeck exploration noise. Rewards comprise a basic term, an optimization‑goal term, a baseline term, and an exploration term. DDPG (an actor‑critic, model‑free algorithm) iteratively refines the configuration to satisfy all constraints.
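The two ingredients named above can be sketched independently. The Ornstein-Uhlenbeck process below is the standard mean-reverting noise commonly paired with DDPG's deterministic actor; the `reward` function is only a hedged illustration of combining the four terms, with weights chosen for demonstration rather than taken from the paper:

```python
import math
import random

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration
    noise, as typically added to DDPG's deterministic actions."""
    def __init__(self, mu=0.0, theta=0.15, sigma=0.2, dt=1.0, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = mu
        self.rng = random.Random(seed)

    def sample(self):
        # dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * math.sqrt(self.dt) * self.rng.gauss(0.0, 1.0))
        self.x += dx
        return self.x

def reward(basic, opt_goal, baseline, exploration, weights=(1.0, 1.0, 1.0, 1.0)):
    """Illustrative weighted sum of the four reward terms; the actual
    weighting scheme in INCS is not specified here."""
    w1, w2, w3, w4 = weights
    return w1 * basic + w2 * opt_goal + w3 * baseline + w4 * exploration
```

At each step the actor's output plus an `OUNoise` sample becomes the perturbed configuration action, and the critic is trained against rewards of this shape.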
Performance Analysis
Specification consistency : INCS achieves 100 % compliance across all test topologies (small, medium, large) and five hard‑constraint types, and remains the most consistent under stress tests with unsatisfiable constraints.
Synthesis efficiency : On large topologies with many complex constraints, the baseline NetComplete times out (>25 min) while INCS completes in 53.69 s, a 27.9× speedup. Although graph construction dominates runtime on small networks, inference time stays minimal on large scales.
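The 27.9× figure follows directly from comparing INCS's runtime against the 25-minute timeout bound (so it is a lower bound on the true speedup, since NetComplete never finished):

```python
# Sanity-check the reported speedup: 25-minute timeout vs. 53.69 s.
timeout_s = 25 * 60          # NetComplete's timeout, in seconds
incs_s = 53.69               # INCS synthesis time on the same instance
speedup = timeout_s / incs_s
print(round(speedup, 1))     # 27.9
```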
Conclusion and Future Work
INCS’s contributions are: (1) first integration of DRL into configuration synthesis, solving large‑state‑space search; (2) drastic reduction of synthesis time while guaranteeing 100 % intent satisfaction; (3) support for multiple IETF protocols and flexible handling of soft/hard constraints, enabling intent‑driven networking. Future directions include exploring more complex multi‑protocol joint optimization and scaling to ultra‑large networks.
This article has been distilled and summarized from source material and republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Network Intelligence Research Center (NIRC)
NIRC is based on the National Key Laboratory of Network and Switching Technology at Beijing University of Posts and Telecommunications. It has built a technology matrix across four AI domains—intelligent cloud networking, natural language processing, computer vision, and machine learning systems—dedicated to solving real‑world problems, creating top‑tier systems, publishing high‑impact papers, and contributing significantly to the rapid advancement of China's network technology.
