How Segment Routing Transforms MPLS: From Theory to Data Center Applications
This article explains the evolution of Segment Routing, its core concepts such as Segments, SIDs and Segment Lists, and how SR‑MPLS and SRv6 improve MPLS forwarding, TE, and data‑center overlay networking with UDP and load‑balancing techniques.
Segment Routing
MPLS traditionally relies on LDP to map IGP routes to MPLS labels, but LDP offers no traffic-engineering capability, so TE requires an additional RSVP-TE control plane. Segment Routing (SR) was introduced to overcome these limitations.
The source‑routing idea first appeared in 1977, and Cisco’s SR‑MPLS implementation in 2013 realized it in practice.
In SR, the forwarding path is determined at the source by dividing the network into Segments identified by a Segment Identifier (SID). An ordered Segment List is inserted at the ingress node, and intermediate nodes forward packets according to this list.
The network topology is split into Segments, each uniquely identified by a SID.
The ingress node inserts an ordered Segment List; downstream nodes forward based on the list.
Key terms: Segment, Segment ID (SID), Segment List, SR Domain.
SR supports both MPLS and IPv6 data planes.
SR-MPLS: uses an MPLS label as the SID.
SRv6: uses a special IPv6 address as the SID.
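The two SID encodings can be sketched as simple data types; this is a hypothetical illustration, and the label and address values are made up:

```python
# Hypothetical sketch of the two SID encodings. Values are illustrative.
from dataclasses import dataclass
from ipaddress import IPv6Address

@dataclass
class SrMplsSid:
    label: int            # SR-MPLS: the SID is an MPLS label (20-bit value)

@dataclass
class Srv6Sid:
    address: IPv6Address  # SRv6: the SID is a special IPv6 address

sid_mpls = SrMplsSid(label=16005)
sid_v6 = Srv6Sid(address=IPv6Address("2001:db8:100::5"))
```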
SR-MPLS
The data plane remains MPLS, while the control plane replaces LDP with an IGP extended with SR attributes, providing source-routed label forwarding without LDP or RSVP-TE signaling.
Basic forwarding principle
Nodes and adjacencies are defined as Segments, each assigned a SID. A Segment List (Node SIDs + Adjacency SIDs) describes an end‑to‑end path, encapsulated in the MPLS label stack at the ingress node.
Node SIDs are manually configured and advertised globally; Adjacency SIDs are local but also advertised within the IGP.
SR-MPLS BE forwarding
Best‑effort (BE) uses Node SID with IGP SPF to compute the shortest path, creating an SR LSP that behaves like a regular MPLS LSP, supporting push, swap, pop, PHP, and QoS.
Manual configuration of Node SID and SRGB, announced via IGP.
Label allocation: label = SRGB start + Node SID index (when swapping, the outgoing label is the next hop's SRGB base plus the same index).
Path calculation using IGP SPF to generate forwarding entries.
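The label-allocation rule above can be sketched in a few lines; the SRGB base (16000 is a common default) and SID index here are illustrative assumptions:

```python
# Minimal sketch of SR-MPLS BE label allocation, assuming an illustrative
# SRGB base. Each node derives the outgoing label from the *next hop's*
# SRGB base plus the destination's Node SID index.

def outgoing_label(next_hop_srgb_base: int, node_sid_index: int) -> int:
    """Outgoing label = next hop's SRGB start + Node SID index."""
    return next_hop_srgb_base + node_sid_index

# Egress PE advertises Node SID index 100; next hop's SRGB base is 16000:
print(outgoing_label(16000, 100))  # 16100
```

Because every node applies the same rule against its own neighbor's SRGB, the label for a given destination stays predictable across the whole SR domain even when nodes use different SRGB bases.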
SR-MPLS TE forwarding
SR‑MPLS TE tunnels combine MPLS TE attributes with SR, supporting BFD fault detection. SDN controllers can build these tunnels automatically using BGP‑LS, PCEP, NETCONF, and policy‑driven path computation.
Manual IGP SR configuration.
Topology and label information reported to the controller via BGP‑LS.
Path computation via PCEP.
Tunnel attributes and LSP information distributed by NETCONF and PCEP.
PE routers create the SR‑TE tunnel.
Two implementation modes exist: strict explicit paths using only Adjacency Segments, and loose explicit paths combining Adjacency and Node Segments.
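The contrast between the two modes can be shown with example Segment Lists; the label values are hypothetical (Adjacency SIDs from a local 24000+ range, Node SIDs from an assumed SRGB base of 16000):

```python
# Hypothetical sketch of the two SR-MPLS TE explicit-path modes.
# Label values are illustrative, not from any real deployment.

# Strict explicit path: Adjacency SIDs only, so every hop is pinned.
strict_segment_list = [24001, 24005, 24009]

# Loose explicit path: a Node SID lets IGP SPF carry the packet to that
# node, then an Adjacency SID pins one specific link, then SPF again.
loose_segment_list = [16003, 24007, 16008]

# The ingress PE pushes the Segment List as the MPLS label stack, top first.
label_stack = strict_segment_list.copy()
print(label_stack)  # [24001, 24005, 24009]
```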
SR-MPLS in data‑center scenarios
Historically limited to carrier backbones, SR‑MPLS is now being explored in data centers thanks to programmable networking technologies (DPDK, P4, DPUs) that bring label‑forwarding capability to servers and NICs.
MPLS over UDP
Used for overlay networks; see RFC 7510 (MPLS-in-UDP) and RFC 3032 (MPLS label stack encoding). The packet format includes a source port (entropy), destination port (fixed to 6635 for MPLS‑in‑UDP), UDP length, checksum, MPLS label stack, and payload.
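The encapsulation described above can be sketched with a simple packet builder; field values are illustrative, and the UDP checksum is left at zero for brevity (real deployments follow RFC 7510's checksum rules):

```python
# Minimal sketch of MPLS-in-UDP encapsulation (RFC 7510), using the
# RFC 3032 label stack entry layout: label(20) | TC(3) | S(1) | TTL(8).
import struct

MPLS_IN_UDP_DST_PORT = 6635  # IANA-assigned port for MPLS-in-UDP

def label_stack_entry(label: int, tc: int = 0, s: int = 1, ttl: int = 64) -> bytes:
    """Encode one 32-bit MPLS label stack entry."""
    return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

def mpls_in_udp(src_port: int, labels: list[int], payload: bytes) -> bytes:
    # S bit is set only on the bottom (last) label of the stack.
    stack = b"".join(
        label_stack_entry(l, s=1 if i == len(labels) - 1 else 0)
        for i, l in enumerate(labels)
    )
    udp_len = 8 + len(stack) + len(payload)
    # Checksum 0 here for brevity; real use follows RFC 7510.
    udp_header = struct.pack("!HHHH", src_port, MPLS_IN_UDP_DST_PORT, udp_len, 0)
    return udp_header + stack + payload

pkt = mpls_in_udp(src_port=49152, labels=[16003, 16005], payload=b"ip-packet")
print(len(pkt))  # 8 (UDP) + 8 (two labels) + 9 (payload) = 25
```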
SR‑MPLS over UDP
Applied to multi‑data‑center overlays, enabling segment routing, inter‑data‑center routing, tunneling across non‑SR cores, and mixed‑mode IP networks.
Load balancing based on UDP srcPort
Setting the UDP source port to a hash of the original packet improves load balancing at the network layer (ECMP), NIC bonding layer, and CPU core layer in DPDK environments.
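The source-port entropy idea can be sketched as follows; the CRC32 hash and the ephemeral port range are illustrative choices, not a mandated scheme:

```python
# Hedged sketch of entropy-based UDP source-port selection: hash the inner
# packet's 5-tuple so all packets of one flow share a source port (no
# reordering), while distinct flows spread across ECMP paths, bonded NICs,
# and RSS queues/CPU cores.
import zlib

EPHEMERAL_LO, EPHEMERAL_HI = 49152, 65535  # dynamic/private port range

def entropy_src_port(src_ip: str, dst_ip: str, proto: int,
                     src_port: int, dst_port: int) -> int:
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return EPHEMERAL_LO + zlib.crc32(key) % (EPHEMERAL_HI - EPHEMERAL_LO + 1)

p1 = entropy_src_port("10.0.0.1", "10.0.1.2", 6, 33000, 443)
p2 = entropy_src_port("10.0.0.1", "10.0.1.2", 6, 33000, 443)
assert p1 == p2  # same flow always hashes to the same port
```

Transit routers that hash on the outer 5-tuple then see per-flow variation in the source port and spread traffic accordingly, without needing to parse the MPLS payload.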