How to Enable Cross‑VPC Function Compute with NAT and VXLAN
This article explains a VPC NAT solution that lets function compute pods in a shared Kubernetes VPC securely access services in overlapping business VPCs by using NAT ENIs, MAC adjustments, VXLAN encapsulation, and SNAT/DNAT rules.
Background
In the function compute service, user tasks run as Kubernetes Pods in a shared VPC. Different business users have their own VPCs with overlapping IP ranges, causing Pods from different users to target the same IP address, leading to traffic ambiguity.
<code># -*- coding: utf-8 -*-
import redis

def main_handler(event, context):
    # Connect to the user's Redis service at a fixed private address
    r = redis.StrictRedis(host='172.16.0.3', port=6379, db=0,
                          password="crs-i4kg86dg:abcd1234")
    print(r.set('foo', 'bar'))
    print(r.get('foo'))
    return r.get('foo')
</code>
When two users A and B each deploy a Redis service at 172.16.0.3 in their own VPCs and invoke the above function, the Pods share the same source address space in the unified VPC, and the vSwitch cannot distinguish which VPC the traffic belongs to.
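The ambiguity can be seen with a minimal sketch: when two business VPCs use the same CIDR, the destination IP alone cannot identify which VPC a packet is meant for. The VPC CIDRs below are illustrative.

```python
import ipaddress

# Two business users happen to use the same private CIDR in their own VPCs
vpc_a = ipaddress.ip_network("172.16.0.0/24")  # user A's VPC
vpc_b = ipaddress.ip_network("172.16.0.0/24")  # user B's VPC

# The Redis address hard-coded in the function above
dest = ipaddress.ip_address("172.16.0.3")

# Both VPCs contain the destination, so a plain route lookup on the IP
# is ambiguous; an extra identifier (the VNI) is needed to pick the VPC.
print(dest in vpc_a, dest in vpc_b)  # True True
```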
Solution Options
Data Flow Design
The VPC NAT gateway forwards traffic as follows.

Forward path: when a function Pod is created, the CNI Agent injects a route toward the business VPC that rewrites the next-hop MAC to the unique MAC of the NAT ENI. The vSwitch distinguishes traffic by this destination MAC, assigns the appropriate VNI, and forwards the packet to the VPC NAT gateway. The gateway matches the source CIDR, VNI, and destination CIDR against a NAT policy, performs SNAT to replace the source address with the NAT ENI address, records a session, and encapsulates the packet in VXLAN toward the target service.

Return path: the service's reply follows the reverse path. The VPC NAT gateway performs DNAT using the recorded session, restores the destination to the function Pod's IP, re-encapsulates the packet in VXLAN, and delivers it to the Pod's physical node.
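The gateway's match-and-rewrite logic can be modeled with a short sketch. This is an illustrative model of the behavior described above, not the gateway's actual code; all CIDRs, the VNI, and the NAT ENI address are hypothetical.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class NatPolicy:
    src_cidr: ipaddress.IPv4Network  # function Pod CIDR in the shared VPC
    vni: int                         # VXLAN network identifier set by the vSwitch
    dst_cidr: ipaddress.IPv4Network  # business VPC CIDR
    nat_ip: str                      # NAT ENI address, the Pod's identity in the business VPC

policies = [
    NatPolicy(ipaddress.ip_network("10.0.0.0/16"), 100,
              ipaddress.ip_network("172.16.0.0/24"), "172.16.0.200"),
]
sessions = {}  # (nat_ip, dst, vni) -> original Pod source IP

def snat(src, dst, vni):
    """Forward path: match a policy, rewrite the source, record a session."""
    for p in policies:
        if (ipaddress.ip_address(src) in p.src_cidr and vni == p.vni
                and ipaddress.ip_address(dst) in p.dst_cidr):
            sessions[(p.nat_ip, dst, vni)] = src
            return p.nat_ip, dst  # rewritten (src, dst)
    return None  # no policy matched

def dnat(src, dst, vni):
    """Return path: use the session to restore the Pod's IP as destination."""
    pod_ip = sessions.get((dst, src, vni))
    return (src, pod_ip) if pod_ip else None

print(snat("10.0.1.5", "172.16.0.3", 100))     # ('172.16.0.200', '172.16.0.3')
print(dnat("172.16.0.3", "172.16.0.200", 100))  # ('172.16.0.3', '10.0.1.5')
```

The session keyed by (NAT IP, destination, VNI) is what lets the reply be mapped back to the right Pod even though several users' VPCs share the same address space.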
For public-Internet traffic from function Pods, the VPC NAT table includes a default policy: traffic that matches no per-VPC rule is SNATed to the business VPC's EIP and pushed to the underlay network; on the return path, DNAT restores the destination to the function Pod's IP.
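The default policy amounts to a catch-all in the SNAT address selection. A minimal self-contained sketch, with an illustrative per-VPC policy table and a hypothetical EIP:

```python
import ipaddress

# Per-VPC policies: destination CIDR -> NAT ENI address (illustrative values)
VPC_POLICIES = {ipaddress.ip_network("172.16.0.0/24"): "172.16.0.200"}
DEFAULT_EIP = "203.0.113.10"  # hypothetical EIP of the business VPC

def pick_snat_ip(dst: str) -> str:
    """Return the SNAT address: the NAT ENI IP for business-VPC traffic,
    otherwise the default EIP for public-Internet traffic."""
    addr = ipaddress.ip_address(dst)
    for cidr, nat_ip in VPC_POLICIES.items():
        if addr in cidr:
            return nat_ip
    return DEFAULT_EIP  # default policy: unmatched traffic goes out via the EIP

print(pick_snat_ip("172.16.0.3"))  # 172.16.0.200 (business VPC service)
print(pick_snat_ip("8.8.8.8"))     # 203.0.113.10 (public Internet)
```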
Control Flow Design
1. The user creates a vpc_nat on the HULK platform for a function Pod (VPC A) to access services in a business VPC (VPC B).
2. HULK calls ultron to create the vpc_nat, passing the IDs of VPC A and VPC B.
3. Ultron calls neutron to create resources: (a) a port in VPC B with device_owner "network:vpc_nat"; the port's IP (the NAT IP) serves as VPC A's identity when accessing VPC B; (b) a vpc_nat in VPC A using the port's IP, MAC, and VPC B's tunnel ID; neutron then issues the underlying forwarding rules.
4. HULK calls the NAT gateway with the CIDRs of VPC A and VPC B; the NAT gateway installs the corresponding forwarding rules.
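The resources created in these steps can be sketched as follows. The HULK/ultron/neutron calls are stand-ins for the platform components named above, not real client APIs, and all concrete values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Port:
    vpc_id: str
    device_owner: str  # "network:vpc_nat" marks the port's role
    ip: str            # the NAT IP: VPC A's identity inside VPC B
    mac: str

@dataclass
class VpcNat:
    vpc_id: str
    nat_ip: str
    nat_mac: str
    tunnel_id: int  # VPC B's tunnel ID (VNI)

def create_vpc_nat(vpc_a: str, vpc_b: str, vpc_b_tunnel_id: int) -> VpcNat:
    # (a) neutron allocates a port in VPC B; IP and MAC here are illustrative
    port = Port(vpc_b, "network:vpc_nat", "172.16.0.200", "fa:16:3e:00:00:01")
    # (b) the vpc_nat in VPC A reuses the port's IP/MAC plus VPC B's tunnel ID;
    #     neutron then issues the underlying forwarding rules
    return VpcNat(vpc_a, port.ip, port.mac, vpc_b_tunnel_id)

nat = create_vpc_nat("vpc-a", "vpc-b", 100)
print(nat.nat_ip, nat.tunnel_id)  # 172.16.0.200 100
```

The key design point is that the port lives in VPC B while the vpc_nat lives in VPC A, so a single NAT IP cleanly represents VPC A inside VPC B's address space.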
Conclusion
The virtual-network cross-VPC NAT solution lets instances in different VPCs communicate despite overlapping address spaces, extending existing NAT gateway capabilities rather than replacing them. It is easy to use, highly scalable, and applicable to services beyond function compute.
360 Zhihui Cloud Developer
360 Zhihui Cloud is an enterprise open service platform that aims to "aggregate data value and empower an intelligent future," leveraging 360's extensive product and technology resources to deliver platform services to customers.