
Mastering VxLAN: From Fundamentals to Advanced EVPN Integration

This article provides a comprehensive guide to VxLAN technology, covering its architecture, tunneling mechanisms, NVE/VTEP roles, L2/L3 gateway designs, MTU considerations, and the integration of EVPN MP‑BGP with detailed explanations of route types and deployment scenarios.


VxLAN Overview

In August 2011, VMware, Cisco, and several other vendors submitted the VXLAN Internet-Draft, later published as RFC 7348 (August 2014). It defines Virtual Extensible Local Area Network (VxLAN) as an overlay tunneling technology that encapsulates Layer-2 frames inside UDP packets for transport over Layer-3 networks.

VxLAN enables devices in different L3 subnets to appear on a single logical L2 domain, supporting large‑scale multi‑tenant cloud environments.

VxLAN Topology Components

NVE / VTEP / VxLAN Tunnel

VxLAN L2 Gateway / L2 VNI / Bridge Domain (BD)

VxLAN L3 Gateway / L3 VNI / VRF

NVE / VTEP / VxLAN Tunnel

The Network Virtualization Edge (NVE) sits at the boundary between the underlay and overlay networks, providing virtualization functions. An NVE can be a software instance (e.g., OVS Tun Bridge, Linux Bridge + Tun) or a hardware device (e.g., spine/leaf switch). It must implement VTEP capabilities to encapsulate and decapsulate VxLAN traffic.

VxLAN uses a 24-bit VXLAN Network Identifier (VNI) instead of 12-bit VLAN IDs to separate virtual networks, supporting roughly 16 million segments compared with 4094 VLANs.

VxLAN L2 Gateway / L2 VNI / BD

The L2 gateway connects VxLAN to VLANs. In leaf NVE devices, a Bridge Domain (BD) maps a local VLAN ID to an L2 VNI. Two types of interfaces are defined:

Overlay-side interface: a Layer-2 sub-interface on the host side that receives original L2 frames for encapsulation.

Underlay-side interface: a Layer-3 interface with an IP address used to forward the encapsulated packets across the underlay.

The BD maintains a VID‑VNI mapping, allowing traffic from the local VLAN to be injected into the appropriate VxLAN tunnel.
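The VID-VNI lookup can be sketched as a simple mapping table. This is an illustrative Python model, not any vendor's API; real devices program this mapping in hardware, and the class and value names here are hypothetical:

```python
from typing import Optional


class BridgeDomain:
    """Sketch of a Bridge Domain's VLAN-ID-to-L2-VNI mapping."""

    def __init__(self):
        self.vid_to_vni = {}

    def map_vlan(self, vid: int, vni: int) -> None:
        # VLAN IDs are 12-bit (1-4094); VNIs are 24-bit.
        assert 1 <= vid <= 4094, "VLAN ID out of range"
        assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
        self.vid_to_vni[vid] = vni

    def lookup(self, vid: int) -> Optional[int]:
        # Returns the L2 VNI for a frame arriving on this VLAN,
        # or None if the VLAN belongs to no bridge domain.
        return self.vid_to_vni.get(vid)


bd = BridgeDomain()
bd.map_vlan(100, 10100)   # local VLAN 100 -> L2 VNI 10100
print(bd.lookup(100))     # -> 10100
```

A frame arriving on a mapped VLAN is injected into the VxLAN tunnel for the returned VNI; a `None` result means the frame is handled by ordinary local bridging.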

VxLAN L3 Gateway / L3 VNI / VRF

The L3 gateway provides inter‑VNI routing and connectivity to external IP networks. It uses VRF instances to keep routing tables separate per tenant. Two deployment models exist:

Centralized L3 gateway placed on spine switches.

Distributed L3 gateway deployed on leaf switches.

The centralized model simplifies management but may cause sub‑optimal paths and ARP table pressure on the spine, making it suitable for small networks. The distributed model scales better for medium‑large deployments but requires more complex configuration, often leveraging EVPN for automation.

VxLAN Protocol Stack

The stack consists of:

Original L2 frame (payload).

VxLAN header (8 bytes) containing flags, reserved fields, and a 24‑bit VNI.

UDP header (8 bytes) providing low‑overhead transport and hash‑based load balancing.

Outer IP header (20 bytes) for underlay routing.

Outer MAC header for the underlay Ethernet frame.
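The 8-byte VxLAN header itself is two 32-bit words: flags plus 24 reserved bits, then the 24-bit VNI plus 8 reserved bits. A minimal sketch of building and parsing it, following the RFC 7348 layout:

```python
import struct


def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Word 1: 8 flag bits (the I bit, 0x08, marks a valid VNI) + 24 reserved bits.
    Word 2: 24-bit VNI + 8 reserved bits.
    """
    assert 0 <= vni < 2 ** 24, "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)


hdr = vxlan_header(10100)
assert len(hdr) == 8

# Decapsulation side: recover the VNI from the second word.
_, word2 = struct.unpack("!II", hdr)
assert word2 >> 8 == 10100
```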

Data‑Plane Forwarding

BUM (Broadcast, Unknown‑unicast, Multicast) Traffic

BUM traffic can be handled by Head-End Replication (HER) or Core Replication (CR). Hardware NVE devices typically support both methods: HER replicates packets at the source VTEP, sending one unicast copy per remote VTEP, while CR offloads replication to dedicated core devices, typically using underlay multicast.
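The HER behavior can be sketched as follows: the ingress VTEP walks the VNI's flood list and emits one encapsulated unicast copy per remote peer (illustrative Python; the function and peer addresses are hypothetical):

```python
def head_end_replicate(frame, flood_list):
    """Return one (remote_vtep_ip, frame) pair per peer in the flood list.

    In HER, a BUM frame received on the overlay side is not multicast in
    the underlay; the source VTEP unicasts a separate copy to each
    remote VTEP participating in the VNI.
    """
    return [(vtep_ip, frame) for vtep_ip in flood_list]


# An ARP broadcast arriving at the ingress VTEP for a VNI with two
# remote VTEPs yields two unicast copies.
copies = head_end_replicate(b"<arp-broadcast-frame>", ["10.0.0.2", "10.0.0.3"])
print(len(copies))  # -> 2
```

The trade-off visible here is HER's cost: replication load grows linearly with the number of remote VTEPs in the VNI, which is why CR can be preferable in large flood domains.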

Same‑VNI Unicast

Unicast forwarding involves ARP request broadcast, ARP reply unicast, and IP unicast phases. The article details the encapsulation/decapsulation steps for a VM‑A to VM‑C communication example.

Different‑VNI Unicast (L3 Gateway)

Cross‑VNI traffic uses the VxLAN L3 gateway. The centralized and distributed L3 gateway designs are described, illustrating how packets traverse spine or leaf devices and how VRF routing is applied.

MTU Considerations

Because VxLAN adds roughly 50 bytes of overhead, the encapsulated packet can exceed the standard 1500‑byte MTU. To avoid fragmentation, either reduce the VM/host MTU or increase the underlay MTU (e.g., to 9000 bytes in DCN scenarios). The article provides a detailed byte‑level breakdown for DCN and DCI environments.
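The 50-byte figure follows directly from the header sizes listed in the protocol-stack section. A quick arithmetic check (IPv4 underlay, untagged outer frame; assumed values for the host MTU):

```python
# Per-packet VxLAN overhead added on top of the original IP payload:
INNER_ETH = 14   # inner Ethernet header carried as VXLAN payload
VXLAN = 8        # VXLAN header
UDP = 8          # outer UDP header
OUTER_IP = 20    # outer IPv4 header (40 bytes if the underlay is IPv6)

OVERHEAD = INNER_ETH + VXLAN + UDP + OUTER_IP
print(OVERHEAD)  # -> 50

# A host sending full 1500-byte IP packets therefore needs an underlay
# IP MTU of at least 1550 to avoid fragmentation; alternatively, lower
# the host MTU to 1450 and keep the underlay at 1500.
host_mtu = 1500
print(host_mtu + OVERHEAD)  # -> 1550

# Jumbo-frame underlays (e.g., 9000-byte MTU in DCN designs) absorb the
# overhead with room to spare.
assert host_mtu + OVERHEAD <= 9000
```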

EVPN MP‑BGP

EVPN replaces legacy VPLS and serves as a control plane for various data-plane technologies, including VxLAN. RFC 7432 defines route Types 1-4, conveying Ethernet segment, MAC/IP, and multicast membership information; RFC 9136 adds Type 5 for IP prefix advertisement.

EVPN VxLAN Integration

Manual static VxLAN tunnel configuration is impractical for large deployments. EVPN automates VTEP discovery, tunnel establishment, and MAC/IP advertisement, reducing BUM flooding and simplifying operations.

Route Types Used in EVPN VxLAN

Type 2 – MAC/IP Advertisement Route: carries host MAC, IP, L2 VNI, and L3 VNI, enabling integrated bridging and routing.

Type 3 – Inclusive Multicast Ethernet Tag Route: distributes VTEP IPs and VNI information for head-end replication lists.

Type 5 – IP Prefix Route: advertises IP prefixes (or host routes) associated with an L3 VNI, reducing routing table size and enabling external network connectivity.

Examples illustrate how leaf switches exchange Type 2 routes to learn host MAC/IP tables, how Type 3 routes build replication lists, and how Type 5 routes announce tenant subnets to other VTEPs.
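The key fields each route type carries can be summarized as simple records. This is a heavily simplified sketch with illustrative field names; consult RFC 7432 and RFC 9136 for the actual NLRI encodings:

```python
from dataclasses import dataclass


@dataclass
class MacIpRoute:
    """Type 2: advertises a locally learned host."""
    mac: str
    ip: str
    l2_vni: int        # bridging context for same-subnet traffic
    l3_vni: int        # routing context for inter-subnet traffic
    next_hop_vtep: str


@dataclass
class InclusiveMulticastRoute:
    """Type 3: lets remote VTEPs build HER flood lists per VNI."""
    vni: int
    originating_vtep: str


@dataclass
class IpPrefixRoute:
    """Type 5: announces a tenant subnet or external prefix."""
    prefix: str
    l3_vni: int
    next_hop_vtep: str


# A leaf learning VM-A would originate a Type 2 route like this
# (all addresses and VNI values here are made up for illustration):
r = MacIpRoute("00:11:22:33:44:55", "192.168.1.10", 10100, 50001, "10.0.0.2")
print(r.next_hop_vtep)  # -> 10.0.0.2
```

Remote leaves import the Type 2 route into both the BD's MAC table (keyed by `l2_vni`) and the tenant VRF (keyed by `l3_vni`), which is what makes integrated bridging and routing work without flooding.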

Key Takeaways

VxLAN provides a scalable overlay for L2 extension over L3.

Proper NVE/VTEP design and gateway placement are critical for performance and manageability.

MTU planning must account for VxLAN overhead to avoid fragmentation.

EVPN MP‑BGP automates control‑plane functions, reduces BUM traffic, and supports both L2 and L3 services.

Network Virtualization · Data Center · VXLAN · EVPN · Overlay Networking
Written by

AI Cyberspace

AI, big data, cloud computing, and networking.
