Design and Architecture of CLOUD‑DPVS Gateway for VPC‑to‑IDC Connectivity
This article describes the design, architecture, and implementation of the CLOUD‑DPVS gateway: a high‑performance, VXLAN‑based load‑balancing solution that connects VPC networks to classic IDC networks. It covers the gateway's high‑availability improvements, FULLNAT mode, traffic flow, and future offload plans.
Background: With the continuous development of cloud computing and networking, many services are being migrated to the cloud. After migration, services can leverage cloud‑provided resources and elastic scaling, but enterprises often end up with a hybrid deployment: part of the workload runs on a cloud platform while the rest remains in their own IDC. The core device enabling seamless VPC‑to‑IDC connectivity is a VXLAN gateway that maps VXLAN networks to VLAN networks. Because traditional switch‑based VXLAN‑to‑VLAN conversion cannot satisfy load‑balancing requirements, the 360 virtualization team developed the CLOUD‑DPVS device to support load balancing, VXLAN tunneling, BFD detection, and other functions.
Overall Architecture: CLOUD‑DPVS operates in the middle layer between VXLAN and VLAN networks. User requests from the VPC are redirected to the CLOUD‑DPVS gateway, where VXLAN decapsulation and SNAT/DNAT processing occur before forwarding the packets to the IDC servers. Return traffic follows the reverse path, undergoing SNAT/DNAT and VXLAN encapsulation before being sent back to the VPC.
High‑Availability Improvements: Traditional HA solutions based on BGP + ECMP provide dynamic failover but suffer from convergence times of several seconds. CLOUD‑DPVS introduces BFD detection to reduce convergence to the millisecond level and adds a scheduler in the VPC network to hash traffic to servers without relying on the underlying network.
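The scheduler's hashing behavior can be illustrated with a minimal sketch (this is an assumption for illustration, not the 360 implementation): each flow's 5‑tuple is hashed onto the set of gateway instances that BFD currently reports as healthy, so failover does not depend on underlay routing convergence.

```python
import hashlib

class HashScheduler:
    """Sketch: hash each flow onto a healthy CLOUD-DPVS gateway instance.
    The healthy set is assumed to be updated by BFD detection results."""

    def __init__(self, gateways):
        self.gateways = sorted(gateways)   # all known gateway addresses
        self.healthy = set(gateways)       # maintained by BFD sessions

    def mark_down(self, gw):
        self.healthy.discard(gw)           # BFD declared the session Down

    def mark_up(self, gw):
        self.healthy.add(gw)               # BFD session re-established

    def pick(self, src_ip, src_port, dst_ip, dst_port, proto="tcp"):
        alive = [g for g in self.gateways if g in self.healthy]
        if not alive:
            raise RuntimeError("no healthy gateway available")
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
        idx = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(alive)
        return alive[idx]
```

The same flow always hashes to the same gateway while membership is stable; when BFD marks a gateway down, only flows hashed to it are redistributed.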
BFD Support: BFD (Bidirectional Forwarding Detection) offers rapid fault detection across various media and protocols, with detection times well below one second. CLOUD‑DPVS implements a BFD processing module mounted at INET_HOOK_PRE_ROUTING, which identifies BFD packets, replies with the appropriate state messages, and updates hash calculations based on detection results.
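Identifying a BFD packet amounts to matching the well-known UDP port and decoding the 24-byte mandatory section defined in RFC 5880. The sketch below shows that decoding step only; the reply and state-machine logic of the actual hook module are omitted.

```python
import struct

BFD_UDP_PORT = 3784  # single-hop BFD control packets (RFC 5881)
STATES = {0: "AdminDown", 1: "Down", 2: "Init", 3: "Up"}

def parse_bfd(payload: bytes) -> dict:
    """Decode the 24-byte mandatory section of a BFD control packet
    (RFC 5880, section 4.1): version/diag, state/flags, detect multiplier,
    length, and the two session discriminators."""
    if len(payload) < 24:
        raise ValueError("truncated BFD control packet")
    b0, b1, mult, length, my_disc, your_disc, tx, rx, echo = \
        struct.unpack("!BBBBIIIII", payload[:24])
    return {
        "version": b0 >> 5,
        "diag": b0 & 0x1F,
        "state": STATES[b1 >> 6],
        "detect_mult": mult,
        "my_discriminator": my_disc,
        "your_discriminator": your_disc,
    }
```

A neighbor is declared down when detect_mult consecutive control packets are missed, which is what gives BFD its millisecond-level failover.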
Load‑Balancing Modes and FULLNAT: DPVS supports NAT, Tunnel, DR, and FULLNAT modes. NAT, DR, and Tunnel have environmental constraints, whereas FULLNAT performs DNAT + SNAT on inbound packets and SNAT + DNAT on outbound packets, eliminating the need for special routing on real servers and providing greater flexibility. CLOUD‑DPVS adopts the FULLNAT mode.
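The FULLNAT rewrite can be sketched as two symmetric address translations (the local-address names below are illustrative assumptions, not DPVS identifiers): inbound packets have both source and destination rewritten, so the real server naturally routes replies back to the gateway.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

def fullnat_inbound(pkt, local_ip, local_port, rs_ip, rs_port):
    """Inbound FULLNAT: DNAT the destination (VIP -> real server) and
    SNAT the source (client -> gateway local address), so the real
    server needs no special routing to reply."""
    return replace(pkt, src_ip=local_ip, src_port=local_port,
                   dst_ip=rs_ip, dst_port=rs_port)

def fullnat_outbound(pkt, vip, vport, client_ip, client_port):
    """Outbound FULLNAT: restore the VIP as source and the original
    client as destination before the packet leaves the gateway."""
    return replace(pkt, src_ip=vip, src_port=vport,
                   dst_ip=client_ip, dst_port=client_port)
```

The gateway keeps a session entry mapping the chosen local port back to the original client tuple, so the outbound path can undo the translation.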
Introducing VPC Concept: The original DPVS was designed for classic IDC networks, where IP addresses must be unique across all users. In a VPC, users can freely assign private IPs, leading to potential IP duplication. To accommodate this, CLOUD‑DPVS associates services with VPCs, allowing identical VIP:PORT pairs across different VPCs. A service is identified by the tuple VXLAN + VIP + vPort.
Service Information Table:

VXLAN | VIP          | vPort | RS-IP        | RS-Port
96    | 172.16.25.13 | 80    | 10.182.10.13 | 80
96    | 172.16.25.13 | 80    | 10.182.10.23 | 80
101   | 172.16.25.13 | 8080  | 10.182.20.2  | 80
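The service table above maps directly onto a lookup keyed by (VXLAN-ID, VIP, vPort). A minimal sketch, using the rows from the table:

```python
# Service table keyed by (VXLAN-ID, VIP, vPort) -> real-server pool.
# Identical VIP:vPort pairs coexist because the VXLAN-ID differs.
SERVICES = {
    (96,  "172.16.25.13", 80):   [("10.182.10.13", 80), ("10.182.10.23", 80)],
    (101, "172.16.25.13", 8080): [("10.182.20.2", 80)],
}

def lookup_service(vni, dst_ip, dst_port):
    """Return the real-server pool for a decapsulated packet, or None
    if no service is registered for this (VXLAN, VIP, vPort) tuple."""
    return SERVICES.get((vni, dst_ip, dst_port))
```

Note that the same VIP 172.16.25.13 serves two different tenants: VXLAN 96 on port 80 and VXLAN 101 on port 8080 resolve to entirely different backends.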
Virtual Machine Information Table:

VXLAN | Virtual IP   | Virtual MAC       | Host IP
96    | 172.16.25.30 | fa:16:3f:5d:6c:08 | 10.162.10.13
96    | 172.16.25.23 | fa:16:3f:5d:7c:08 | 10.162.10.23
101   | 172.16.25.30 | fa:16:3f:5d:6c:81 | 10.192.20.2
Traffic Flow Description: A VPC client accesses a service via VIP:vPort. The traffic is steered by OVS flow rules into a VXLAN tunnel, reaching the CLOUD‑DPVS gateway where the VXLAN header is stripped, and the inner VXLAN‑ID, destination IP, and port identify the target service. CLOUD‑DPVS selects a real server based on a scheduling algorithm, rewrites the packet headers, and forwards it to the IDC server. The response follows the reverse path: CLOUD‑DPVS restores the original mapping, re‑encapsulates the packet with a VXLAN header, and sends it back through the underlay network to the VPC client.
VXLAN Module: In cloud scenarios, all VPC requests are VXLAN‑encapsulated. CLOUD‑DPVS implements a VXLAN module in the forwarding layer to decapsulate inbound VXLAN packets before forwarding them to IDC servers, ensuring the backend is VPC‑agnostic. Outbound packets are re‑encapsulated before returning to the VPC.
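The decapsulation itself is cheap: a VXLAN packet carries an 8-byte header (RFC 7348) holding a flags byte and a 24-bit VNI in front of the inner Ethernet frame. A sketch of the encap/decap step, with the outer UDP/IP headers omitted:

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header (flags + 24-bit VNI) to an
    inner Ethernet frame; outer UDP/IP headers are out of scope here."""
    header = struct.pack("!II", VXLAN_FLAG_VNI << 24, vni << 8)
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags_word, vni_word = struct.unpack("!II", packet[:8])
    if (flags_word >> 24) & VXLAN_FLAG_VNI == 0:
        raise ValueError("VXLAN I flag not set")
    return vni_word >> 8, packet[8:]
```

On the inbound path the gateway calls the decap step and hands the VNI to the service lookup; on the outbound path it restores the header so the backend IDC servers never see VXLAN at all.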
Future Work: While CLOUD‑DPVS currently enables VPC‑to‑IDC traffic, the next step is to support IDC‑to‑VPC connectivity. The VXLAN module is currently software‑based; future plans include leveraging smart NICs to offload VXLAN processing.
References:
https://github.com/iqiyi/dpvs
https://yq.aliyun.com/articles/497058
https://tools.ietf.org/html/rfc5880#section-6.8.5
360 Tech Engineering
The official tech channel of 360, building a professional technology aggregation platform for the brand.