
How Segment Routing Powers UCloud’s Next‑Gen Intelligent Backbone Network

This article examines the rapid evolution of UCloud's data‑center network, detailing the challenges of scaling MAN and backbone infrastructures, the transition from Backbone 1.0 to 2.0, and how Segment Routing and SR‑TE enable intelligent, reliable, and programmable cloud connectivity.


Introduction

With the rapid adoption of cloud services, enterprises demand flexible, high-performance connectivity between on-premises resources, cloud VPCs, and regions. Traditional MPLS backbones struggle to meet these demands, which include flexible service provisioning, hybrid networking, QoS assurance, simplified deployment, efficient bandwidth utilization, and an intelligent, schedulable backbone.

Rapid DCN Iteration

UCloud’s data‑center network (DCN) has grown from a few zones in three major cities to 31 zones across 25 regions worldwide, prompting extensive MAN and backbone upgrades.

DCN iteration steps:

From a single zone to multiple zones within a city.

From single-region multi-zone to multi-region, multi-zone deployments.

Network Challenges from Fast DCN Growth

MAN bandwidth and reliability: Flat deployments generate massive east‑west traffic across zones, demanding high bandwidth and reliability.

Backbone traffic engineering: Global inter‑region traffic requires new routing and traffic‑engineering capabilities.

Global Dedicated Line Resource Layout

UCloud operates over 500 CDN nodes in 25 regions, connecting them via dedicated lines to provide low‑latency, loss‑free inter‑region traffic.

Key capabilities:

Global dedicated‑line access to the backbone for stable end‑to‑end connectivity.

Flexible hybrid networking using local lines and last‑mile Internet.

Point‑to‑point protection and 99.99% availability via SR‑TE‑enabled backup paths.
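
As a rough illustration of how a backup path lifts availability, here is a back-of-the-envelope sketch. It assumes independent path failures and an illustrative 99% per-line availability figure, not UCloud's actual measurements:

```python
# Back-of-the-envelope availability estimate for a primary line
# protected by an SR-TE backup path. Assumes the two paths fail
# independently -- an idealization, since real circuits can share
# fiber conduits or carrier infrastructure.

def combined_availability(paths: list[float]) -> float:
    """Availability of a service that is up while ANY path is up."""
    p_all_down = 1.0
    for a in paths:
        p_all_down *= (1.0 - a)   # probability this path is down
    return 1.0 - p_all_down

# Illustrative figures: each dedicated line at 99% availability.
primary, backup = 0.99, 0.99
print(f"{combined_availability([primary, backup]):.4%}")  # 99.9900%
```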

UCloud Backbone Architecture Evolution

Backbone 1.0 (pre‑2018)

Designed in 2016-2017, Backbone 1.0 used dedicated lines to connect each region's MAN core (M-Core). The M-Cores formed a global IS-IS domain, established IBGP sessions via route reflectors (RRs), and relied on ECMP together with BGP ADD-PATH for equal-cost multipath routing.
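
To make the ECMP behavior concrete, here is a minimal sketch of hash-based path selection over equal-cost next hops. The hash function and flow-tuple fields are assumptions for illustration, not UCloud's implementation:

```python
import hashlib

# Minimal ECMP sketch: hash the flow 5-tuple and pick one of the
# equal-cost next hops. Hashing keeps all packets of a flow on the
# same path, avoiding reordering, while spreading flows across links.

def ecmp_next_hop(src_ip, dst_ip, proto, sport, dport, next_hops):
    flow = f"{src_ip}|{dst_ip}|{proto}|{sport}|{dport}".encode()
    digest = hashlib.sha256(flow).digest()
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

# Paths learned via BGP ADD-PATH: the RR advertises several equal-cost
# routes instead of only one best path, so the M-Core can load-share.
paths = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]   # illustrative next hops
print(ecmp_next_hop("172.16.5.9", "192.168.40.2", 6, 51000, 443, paths))
```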

Design goals:

Region‑level DCN interconnect for cross‑region disaster recovery.

Support for UDPN‑based cross‑VPC connectivity.

New challenges:

Complex MPLS‑MAN coupling and difficult line provisioning.

Inability to interconnect disparate physical locations quickly.

Lack of intelligent traffic scheduling across regions.

Backbone 2.0 (pre‑2020)

Addressing Backbone 1.0 limitations, Backbone 2.0 introduced tenant isolation via VXLAN + BGP EVPN (instead of MPLS‑VPN) and split the architecture into Underlay and Overlay layers.

Underlay layer: TBR devices terminate carrier Layer-2 circuits; TER devices act as VTEPs, forming a global VXLAN fabric that enables any-to-any Layer-2 connectivity.

Overlay layer: Retains Backbone 1.0's routing design, using VXLAN-based inter-M-Core links, IS-IS as the IGP, and IBGP for route propagation.
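
To illustrate the per-tenant isolation and the transport overhead noted in the challenges below, here is a sketch of VXLAN encapsulation. The header layout and byte counts follow RFC 7348; the packet-building code is an illustration, not UCloud's data path:

```python
import struct

# VXLAN header (RFC 7348): 8 bytes -- a flags byte with the I bit set,
# reserved fields, and a 24-bit VNI identifying the tenant segment.
def vxlan_header(vni: int) -> bytes:
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    return struct.pack("!II", 0x08 << 24, vni << 8)

# Encapsulation overhead on every packet crossing the backbone:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8) bytes.
OVERHEAD = 14 + 20 + 8 + 8
print(len(vxlan_header(10001)))  # 8
print(f"{OVERHEAD} bytes added per 1500-byte frame "
      f"= {OVERHEAD / 1500:.1%} extra transport cost")
```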

Design goals:

Strict separation of MAN and backbone for easier operations.

Business‑centric access via VXLAN + BGP EVPN.

Rapid provisioning of resources across on‑premise and cloud locations.

New challenges:

No intelligent traffic scheduling.

Limited flexibility due to dedicated-line-only access.

VXLAN overhead increases transport cost.

No L3VPN support.

Segment Routing Basics

Segment Routing

SR encodes an ordered list of segments (a segment list) in the packet header, allowing the source node to dictate the exact forwarding path without maintaining per-flow state on intermediate nodes.
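
A toy simulation makes the source-routing idea concrete. This is a minimal sketch over an invented three-hop topology; real SR-MPLS encodes segments as MPLS labels and SRv6 as IPv6 segment identifiers:

```python
# Toy SR forwarding: the ingress pushes a segment list, and each node
# that matches the active segment pops it and forwards toward the next
# one. Only the ingress holds path state; transit nodes just execute
# segments. (Here each segment is a direct neighbor, like an
# adjacency-SID; a prefix-SID would follow IGP shortest paths instead.)

# Invented topology: who each node can reach directly.
LINKS = {"A": ["B", "C"], "B": ["C", "D"], "C": ["D"], "D": []}

def forward(node: str, segment_list: list[str]) -> None:
    while segment_list:
        next_seg = segment_list[0]
        if node == next_seg:          # reached the active segment: pop it
            segment_list.pop(0)
            continue
        assert next_seg in LINKS[node], f"{node} cannot reach {next_seg}"
        print(f"{node} -> {next_seg}")
        node = next_seg
    print(f"delivered at {node}")

# Steer A->D explicitly through B (e.g., to avoid a congested A->C link).
forward("A", ["B", "D"])   # A -> B, B -> D, delivered at D
```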

Common SID Types

Prefix-SID: A global segment advertised by the IGP; it steers traffic along the ECMP-aware shortest path to the prefix (see the label sketch after this list).

Node-SID: A Prefix-SID assigned to a node's /32 loopback address, uniquely identifying that router.

Adjacency-SID: A local segment that forces traffic onto a specific link, bypassing the IGP shortest-path decision.

Anycast-SID: The same Prefix-SID configured on multiple nodes; traffic is steered to the nearest member, supporting ECMP and high availability.
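
In SR-MPLS, global SIDs are distributed as indexes into a Segment Routing Global Block (SRGB), and each router derives the MPLS label by adding the index to its SRGB base. A minimal sketch follows; the 16000-23999 range is a common vendor default used here as an assumption, and the node names and indexes are invented:

```python
# SR-MPLS label derivation: a global Prefix-SID is advertised as an
# index; each router computes label = SRGB base + index. With the same
# SRGB everywhere, the label for a prefix is identical network-wide.

SRGB_BASE, SRGB_SIZE = 16000, 8000   # common default range 16000-23999

def prefix_sid_label(index: int, base: int = SRGB_BASE) -> int:
    assert 0 <= index < SRGB_SIZE, "SID index outside the SRGB"
    return base + index

# Illustrative Node-SID indexes for three M-Core loopbacks.
nodes = {"mcore-beijing": 11, "mcore-shanghai": 12, "mcore-frankfurt": 13}
for name, idx in nodes.items():
    print(f"{name}: index {idx} -> label {prefix_sid_label(idx)}")
# Adjacency-SIDs, by contrast, are locally allocated outside the SRGB.
```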

SR‑Policy Features

SR-Policy (SR-TE) replaces traditional tunnel interfaces with a segment list that encodes the desired forwarding path. A policy is identified by three elements:

Headend: The node where the policy is instantiated.

Color: A 32-bit value expressing intent (e.g., low-latency, low-cost).

Endpoint: The destination IPv4/IPv6 address.

Policies can be explicit (programmed manually or by a controller) or dynamic (computed on demand by the headend or by a PCE controller via PCEP).
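
A minimal data-model sketch of an SR-Policy and its candidate paths helps fix the terminology. The preference-based selection rule follows the general SR-Policy architecture; the field names and concrete values are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

# An SR-Policy is keyed by (headend, color, endpoint). It carries one
# or more candidate paths; the valid path with the highest preference
# becomes the active path.

@dataclass
class CandidatePath:
    preference: int
    segment_list: list[int]      # MPLS labels, headend to endpoint
    valid: bool = True           # e.g., all segments resolvable

@dataclass
class SRPolicy:
    headend: str
    color: int                   # intent, e.g. 100 = low-latency
    endpoint: str
    candidates: list[CandidatePath] = field(default_factory=list)

    def active_path(self) -> Optional[CandidatePath]:
        usable = [c for c in self.candidates if c.valid]
        return max(usable, key=lambda c: c.preference, default=None)

# Illustrative policy: low-latency intent toward a remote M-Core.
policy = SRPolicy("mcore-beijing", 100, "10.255.0.13", [
    CandidatePath(200, [16012, 16013]),   # explicit path, preferred
    CandidatePath(100, [16013]),          # dynamic fallback
])
print(policy.active_path().segment_list)  # [16012, 16013]
```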

Automatic SR‑Policy Steering

When the egress PE advertises a BGP route, it colors the route by attaching a color extended community. The headend then automatically instantiates an SR-Policy matching the route's color and BGP next hop and steers traffic into it, without complex per-route configuration. This provides fine-grained, intent-based control without a forwarding-performance penalty.
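
A sketch of the matching step: a route is steered into the policy whose (color, endpoint) equals the route's (color, next hop). The route fields, labels, and addresses are illustrative assumptions:

```python
# Automated steering: a BGP route carrying a color extended community
# is matched to the SR-Policy whose (color, endpoint) equals the
# route's (color, BGP next hop). Matching routes are forwarded over
# the policy's segment list instead of the IGP shortest path.

policies = [
    # (color, endpoint) -> active segment list (illustrative labels)
    {"color": 100, "endpoint": "10.255.0.13", "segments": [16012, 16013]},
]

def steer(route: dict) -> str:
    for p in policies:
        if p["color"] == route.get("color") and p["endpoint"] == route["next_hop"]:
            return f"steered into SR-Policy, segment list {p['segments']}"
    return "no matching policy: forwarded on the IGP shortest path"

# Route advertised by the egress PE with color 100 (low-latency intent).
route = {"prefix": "192.0.2.0/24", "next_hop": "10.255.0.13", "color": 100}
print(steer(route))
```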

Why SR/SR‑TE over LDP/RSVP‑TE

UCloud chose SR/SR-TE for the new backbone because, compared with LDP/RSVP-TE, it offers simpler provisioning (segments are distributed by the IGP, with no separate label-distribution or per-tunnel signaling protocol), native ECMP support, and flexible source-based traffic engineering.

Conclusion

The rapid growth of UCloud’s DCN created significant MAN and backbone challenges. By evolving from Backbone 1.0 to 2.0 and now adopting Segment Routing with SR‑TE, UCloud achieves intelligent, reliable, and programmable inter‑region connectivity, while still addressing remaining gaps such as L3VPN support and last‑mile Internet integration.

Tags: Network Architecture, Cloud Computing, Backbone Network, Segment Routing, SR-TE
Written by UCloud Tech

UCloud is a leading neutral cloud provider in China, developing its own IaaS, PaaS, AI service platform, and big data exchange platform, and delivering comprehensive industry solutions for public, private, hybrid, and dedicated clouds.
