
How We Evolved K8s Networking: From Flannel to MAC‑VLAN and VPN

This article details the step‑by‑step evolution of Mafengwo's Kubernetes network—from early Flannel VXLAN setups, through a VPN‑server bridge for external access, to a MAC‑VLAN CNI solution—highlighting design principles, challenges, and recent optimization plans for large‑scale Java micro‑services.

Mafengwo Technology

Part 1: K8s Network Principles and Challenges

Kubernetes provides an application‑level cluster abstraction that handles resource scheduling, deployment, service discovery, scaling, and more. Its network design, however, is complex. This article shares the evolution of the K8s network that serves most of the Java services at Mafengwo.

1. Kubernetes Pod Design

A Pod is the basic scheduling unit, consisting of a pause container and one or more tightly coupled business containers. All containers in a Pod share the same network namespace and can communicate via localhost. Each Pod receives a unique cluster‑wide IP (Pod IP) allowing services to use the same port without conflict.
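The shared network namespace is visible in a minimal Pod manifest. The sketch below is illustrative only — the names, images, and ports are assumptions, not Mafengwo's actual workloads:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod          # illustrative name
spec:
  containers:
  - name: app             # business container
    image: openjdk:8-jre  # example Java runtime image
    ports:
    - containerPort: 8080
  - name: sidecar         # second container in the same Pod
    image: busybox
    # This container can reach the app via localhost:8080, because
    # both containers join the pause container's network namespace
    # and therefore share the single Pod IP.
    command: ["sh", "-c", "sleep 3600"]
```

Because each Pod gets its own IP, two Pods can both listen on port 8080 without conflict — the port only needs to be unique within a Pod.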

2. Challenges

Pod IPs are virtual and routable only inside the cluster, making it difficult for external applications to reach containers directly. The stages below walk through the solutions adopted over time — first with virtual Pod IPs, and later with real ones.

Part 2: Evolution of K8s Container Network

Stage 1 – K8s + Flannel

Initially the team used Flannel VXLAN + kube‑proxy to connect physical‑machine Java applications with containers during a mixed‑run period. Flannel allocates a subnet per host, enabling cross‑host container communication without NAT.

Flannel supports three backend (transport) modes:

VXLAN – the default; encapsulates packets in the kernel, so it works across layer‑3 networks at the cost of some encapsulation overhead.

host‑gw – routes through the host as a gateway; it requires layer‑2 adjacency between nodes, which generally rules it out in cloud environments.

UDP – userspace encapsulation; slow, and typically used only for debugging.
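The mode is selected via the `Backend.Type` key in Flannel's net‑conf.json. The sketch below uses an example Pod network CIDR, not Mafengwo's actual addressing:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Each node then carves a per‑host subnet out of this network for its local Pods, which is what enables cross‑host communication without NAT.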

Stage 2 – K8s + Flannel + VPN‑Server

To allow developers outside the data center to reach container services, an OpenVPN server was deployed as a NodePort service. Clients connect to the VPN and gain direct access to Pod IPs. This approach was quick to roll out and secure, but it requires managing client certificates, and developers must start the VPN connection manually.
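A NodePort Service fronting the VPN server might look like the following sketch. The name, label selector, and node port are assumptions for illustration; only the Service shape and OpenVPN's default port are standard:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: openvpn           # assumed name
spec:
  type: NodePort
  selector:
    app: openvpn          # assumed label on the OpenVPN server Pod
  ports:
  - port: 1194            # OpenVPN's default port
    targetPort: 1194
    protocol: UDP
    nodePort: 31194       # example port in the default 30000-32767 range
```

Once connected, the client routes the Pod CIDR through the tunnel, so Pod IPs become directly reachable from the developer's machine.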

Stage 3 – MAC‑VLAN CNI

To support large‑scale Java micro‑services, the team adopted a MAC‑VLAN CNI. MAC‑VLAN creates virtual interfaces with distinct MAC addresses on a physical parent interface, allowing containers to obtain real IPs. Two IP allocation methods (DHCP or host‑local) were considered; manual IP assignment was chosen due to the lack of a centralized DHCP/IPAM service.
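For reference, the stock macvlan CNI plugin paired with host‑local IPAM — one of the options considered — is configured roughly as follows; the parent interface and subnet are placeholders, and Mafengwo's manual‑assignment approach would replace the `ipam` section with its own allocation logic:

```json
{
  "cniVersion": "0.3.1",
  "name": "macvlan-net",
  "type": "macvlan",
  "master": "eth0",
  "mode": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "192.168.10.0/24",
    "rangeStart": "192.168.10.100",
    "rangeEnd": "192.168.10.200",
    "gateway": "192.168.10.1"
  }
}
```

With this configuration each container receives a virtual interface with its own MAC address on the physical network, so its IP is routable outside the cluster without encapsulation.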

Because components like kube‑dns and nginx‑ingress need ClusterIP, they remain on a virtual network, while business containers use MAC‑VLAN, implementing a “partition‑by‑purpose” strategy.

Part 3: Recent Optimization Directions

Future work includes migrating production clusters to the new network, multi‑datacenter deployment, and refactoring nginx‑ingress to use public DNS and eliminate virtual network dependencies.

Tags: cloud native, Kubernetes, Network, CNI, VPN, flannel, MAC‑VLAN
Written by Mafengwo Technology

External communication platform of the Mafengwo Technology team, regularly sharing articles on advanced tech practices, tech exchange events, and recruitment.