Cloud Connect Network: Architecture and Implementation for Cross‑VPC and Cross‑Region Interconnect
This article describes 360's Cloud Connect Network solution, covering its background, application scenarios, design, server architecture, traffic forwarding mechanisms, CCSI traffic isolation, performance optimizations with DPDK, and future enhancements for multi‑VPC and IDC interconnectivity.
01 Background
With the rapid development of cloud computing and networking, many businesses are migrating to the cloud to improve development efficiency and take advantage of elastic scaling. 360 aims to move all of its services to the cloud, using VPCs for flexible IP address ranges, routing, and isolation, while still requiring inter-VPC communication across regions.
02 Cloud Connect Network Introduction
Cloud Connect Network (CCN) provides fast, high-quality, and stable interconnection between VPCs across regions, and between cloud VPCs and on-premises data centers. It enables on-demand full-mesh connectivity without the complexity of manual peering.
2.1 Application Scenarios
• Multi-VPC interconnect – add several VPCs to a single CCN instance to form a full mesh.
• Cross-region multi-VPC interconnect – connect VPCs in Beijing and Hong Kong, enabling private-network communication and data transfer between them.
2.2 Difference from Peering/Direct Connect
Traditional peering requires a dedicated connection for every pair of VPCs – O(n²) connections for n VPCs – and cannot handle overlapping CIDRs, while Direct Connect is strictly one-to-one. CCN needs only a single instance: all VPCs and IDCs can join it, with support for overlapping CIDRs and automatic route learning.
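The scaling difference is simple arithmetic: a full mesh of n VPCs needs one link per pair, while a CCN needs one attachment per VPC. A quick sketch (function names are illustrative):

```python
def peering_links(n: int) -> int:
    """Full-mesh peering: every pair of VPCs needs its own connection."""
    return n * (n - 1) // 2

def ccn_attachments(n: int) -> int:
    """CCN: each VPC attaches once to the shared instance."""
    return n

# At 10 VPCs, peering already needs 45 connections versus 10 attachments.
```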
03 Cloud Connect Network Implementation
3.1 Design
VPCs connect to the CCN cluster via VXLAN tunnels. Each group of interconnected VPCs corresponds to a virtual switch instance. The team introduced the Cloud Connect Switch Instance (CCSI), based on Linux VRF, to isolate routing tables between groups.
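The VXLAN framing these tunnels rely on is fixed by RFC 7348: an 8-byte header whose first byte sets the I flag (0x08) and whose 24-bit VNI identifies the virtual network. A minimal encode/decode sketch (not the gateway's actual data path, which runs in DPDK):

```python
import struct

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.
    The 0x08 flags byte sets the I bit, marking the VNI field as valid."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    # Word 1: flags + reserved; word 2: VNI in the upper 24 bits + reserved byte.
    return struct.pack("!II", 0x08000000, vni << 8) + inner_frame

def vxlan_decap(packet: bytes) -> tuple:
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    _flags, vni_field = struct.unpack("!II", packet[:8])
    return vni_field >> 8, packet[8:]
```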
3.2 Server Architecture
The gateway uses high-performance DPDK forwarding, user-space protocol stacks, and CCSI routing isolation. Features include kernel bypass, zero-copy, huge pages, polling, and lock-free processing, achieving line-rate performance on a single core.
3.3 Traffic Forwarding Principle
VPC traffic is encapsulated in VXLAN and sent to the CCN gateway, where it is decapsulated and looked up in the CCSI routing table to obtain the destination VIP and VXLAN ID. The gateway then queries the VM information table for the destination host's IP and MAC, re-encapsulates the packet, and forwards it through the underlay network to the target VPC.
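The two-stage lookup can be sketched with hypothetical tables; the table names, field layout, and sample values below are illustrative assumptions, not the gateway's real schema:

```python
import ipaddress

# Per-CCSI routing table: destination subnet -> (destination VXLAN ID, VIP).
ccsi_routes = {
    "ccsi-1": {ipaddress.ip_network("10.2.0.0/16"): (2002, "172.16.0.2")},
}
# VM information table: VIP -> (underlay host IP, host MAC).
vm_info = {
    "172.16.0.2": ("192.168.1.20", "aa:bb:cc:dd:ee:02"),
}

def forward(ccsi: str, dst_ip: str):
    """Longest-prefix match in the CCSI routing table, then a VM-info lookup,
    returning the parameters needed to re-encapsulate onto the underlay."""
    addr = ipaddress.ip_address(dst_ip)
    routes = ccsi_routes[ccsi]
    matches = [net for net in routes if addr in net]
    if not matches:
        return None  # no route: drop the packet
    best = max(matches, key=lambda net: net.prefixlen)
    vni, vip = routes[best]
    host_ip, host_mac = vm_info[vip]
    return vni, host_ip, host_mac
```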
3.4 CCSI Traffic Isolation
Each CCSI instance holds an independent routing table (destination subnet, VXLAN ID, VIP). Overlapping CIDRs across VPCs are handled by placing the VPCs in separate CCSI instances, ensuring isolated routing and preventing duplicate entries.
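Because every lookup is scoped to one CCSI, the same prefix can appear in two instances without conflict. A toy model with assumed values (instance names and entries are made up for illustration):

```python
import ipaddress

# Two CCSI instances; the identical 10.0.0.0/16 prefix appears in both,
# which is safe because each instance owns a private routing table.
ccsi_tables = {
    "ccsi-A": {ipaddress.ip_network("10.0.0.0/16"): (1001, "172.16.0.1")},
    "ccsi-B": {ipaddress.ip_network("10.0.0.0/16"): (2001, "172.16.0.9")},
}

def lookup(ccsi: str, dst: str):
    """Resolve a destination only within the given CCSI's table, so
    overlapping CIDRs in different instances never collide."""
    addr = ipaddress.ip_address(dst)
    for net, entry in ccsi_tables[ccsi].items():
        if addr in net:
            return entry
    return None
```

The same destination address resolves to different VXLAN IDs depending on which CCSI the traffic belongs to, which is exactly the isolation property described above.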
04 Software and Hardware Performance Enhancements
To meet a target of tens of millions of packets per second, the team leveraged DPDK's kernel bypass, RX steering, CPU affinity, zero-copy, polling, lock-free design, and hardware offload, achieving line-rate small-packet forwarding on a single core.
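The polling and batching model can be shown schematically. The Python below only mimics the shape of a DPDK run-to-completion core (real code would call rte_eth_rx_burst from C on a pinned lcore); queue and function names are illustrative:

```python
from collections import deque

BURST = 32  # DPDK-style batch size: amortize per-call overhead over many packets

def poll_loop(rx_queue: deque, handle, max_iters: int) -> int:
    """Schematic run-to-completion loop: one core busy-polls its own RX queue,
    pulls up to BURST packets per iteration, and processes them inline with no
    interrupts, locks, or cross-core handoff."""
    done = 0
    for _ in range(max_iters):
        burst = []
        while rx_queue and len(burst) < BURST:
            burst.append(rx_queue.popleft())
        for pkt in burst:  # process the whole batch before polling again
            handle(pkt)
            done += 1
    return done
```

Each core owning its own queue is what makes the design lock-free: no packet ever crosses a core boundary, so no synchronization is needed on the hot path.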
05 Future Optimizations
Planned features include support for multi-VPC to IDC interconnect and offloading QoS, ACL, and packet processing to 25G/100G NICs (Mellanox) for further performance gains.
360 Tech Engineering
The official technology channel of 360, building a professional technology aggregation platform for the brand.