From Service Mesh 1.0 to 2.0: Solving Envoy Pain Points and Shaping the Future
This article traces the evolution of service mesh from its early concepts through microservice 1.0 and 2.0, analyzes the shortcomings of Envoy‑based sidecars, and explores future directions such as Go‑based data planes, WASM plugins, eBPF/VPP acceleration, QUIC transport, and on‑demand xDS delivery.
On September 1, 2018 Bilgin Ibryam published "Microservices in a Post‑Kubernetes Era" on InfoQ, arguing that service mesh is the next‑generation microservice technology.
Service mesh is examined through eight historical stages:
Computer networking imagination era
ARPANET era
TCP/IP era
Microservice 1.0 era
Microservice 2.0 era
Service Mesh 1.0 era
Service Mesh 2.0 era
Future evolution of Service Mesh
Part I – Overcoming Envoy Maintenance Challenges
Envoy’s C++ codebase sets a high bar for adoption and maintenance; few service teams can confidently read, debug, or extend it. Two main solutions are proposed: rewriting the data plane in a more approachable language such as Go, or extending Envoy with plugins written in other languages.
Ant Group’s MOSN (Modular Open Smart Network), a Go‑based sidecar, replaces Envoy and has been validated in large‑scale production.
Plugin approaches include native C++ extensions, embedded scripting languages (Lua, Node.js), and Go filters bridged through cgo. WebAssembly (WASM) is highlighted as a language‑agnostic, sandboxed way to add custom logic to the data plane, with support from Envoy, Istio, MOSN and OpenResty.
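Whatever the plugin technology, the host proxy exposes the same contract: filters see each request in order and may rewrite or reject it. The Go sketch below illustrates that contract with a hypothetical Filter interface and two toy filters; the names are illustrative, not the real Envoy or MOSN API, and a WASM host would enforce the same semantics across a sandbox boundary.

```go
package main

import (
	"fmt"
	"strings"
)

// Filter is a hypothetical stream-filter interface, loosely modeled on
// the filter-chain extension point that sidecar proxies expose.
type Filter interface {
	// OnRequest may rewrite the request or reject it with an error.
	OnRequest(body string) (string, error)
}

// Denylist rejects requests containing a forbidden word — the kind of
// policy a team might otherwise ship as a WASM module.
type Denylist struct{ Word string }

func (d Denylist) OnRequest(body string) (string, error) {
	if strings.Contains(body, d.Word) {
		return "", fmt.Errorf("request blocked: contains %q", d.Word)
	}
	return body, nil
}

// HeaderInjector prepends a tag line, standing in for header mutation.
type HeaderInjector struct{ Tag string }

func (h HeaderInjector) OnRequest(body string) (string, error) {
	return h.Tag + "\n" + body, nil
}

// RunChain applies filters in order, short-circuiting on rejection.
func RunChain(filters []Filter, body string) (string, error) {
	var err error
	for _, f := range filters {
		if body, err = f.OnRequest(body); err != nil {
			return "", err
		}
	}
	return body, nil
}

func main() {
	chain := []Filter{
		Denylist{Word: "secret"},
		HeaderInjector{Tag: "x-mesh: demo"},
	}
	out, err := RunChain(chain, "GET /healthz")
	fmt.Println(out, err)
}
```

The value of WASM is that each filter in such a chain can be compiled from any source language and hot-loaded, while the host keeps it memory-safe and isolated.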
Part II – Performance Optimizations (eBPF, VPP, QUIC)
Envoy’s sidecar adds latency due to request interception, rule evaluation, and proxying. Optimizations focus on high‑efficiency forwarding and communication.
Replacing iptables with BPF‑based bpfilter to reduce kernel‑user transitions.
Integrating VPP or Cilium (eBPF) to process packets in user or kernel space, avoiding costly copies.
QUIC, a UDP‑based transport, folds connection setup and TLS negotiation into a single handshake, removing the separate TCP three‑way handshake and TLS round trips, so connections start faster and carry multiplexed streams. Envoy is being re‑engineered to use QUIC for sidecar‑to‑sidecar communication.
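The setup savings are easy to quantify in round trips: TCP plus TLS 1.2 costs three RTTs before the first request byte, TCP plus TLS 1.3 costs two, a fresh QUIC connection costs one, and a resumed QUIC connection with 0‑RTT costs none. A small Go calculation makes the comparison concrete (the 30 ms sidecar‑to‑sidecar RTT is an illustrative assumption):

```go
package main

import "fmt"

// setupDelayMs returns the connection-setup latency before the first
// request byte, given the network RTT and the number of handshake
// round trips the protocol stack requires:
//   TCP + TLS 1.2: 1 (TCP handshake) + 2 (TLS) = 3
//   TCP + TLS 1.3: 1 + 1                       = 2
//   QUIC:          transport + crypto combined = 1 (0 when resumed)
func setupDelayMs(rttMs float64, roundTrips int) float64 {
	return rttMs * float64(roundTrips)
}

func main() {
	rtt := 30.0 // assumed RTT between sidecars, in milliseconds
	fmt.Printf("TCP+TLS1.2: %.0f ms\n", setupDelayMs(rtt, 3))
	fmt.Printf("TCP+TLS1.3: %.0f ms\n", setupDelayMs(rtt, 2))
	fmt.Printf("QUIC:       %.0f ms\n", setupDelayMs(rtt, 1))
	fmt.Printf("QUIC 0-RTT: %.0f ms\n", setupDelayMs(rtt, 0))
}
```

For short-lived service-to-service calls, where setup can dominate total latency, cutting two or three round trips per connection is a meaningful win.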
Part III – Tackling xDS Full‑Push Bottlenecks
xDS ("x Discovery Service," the family of APIs including LDS, RDS, CDS, and EDS) delivers configuration to sidecars. Pushing the full mesh configuration to every sidecar leads to excessive memory usage, slow updates, and difficult troubleshooting.
The proposed solution is on‑demand xDS delivery, sending only the configuration required by each sidecar based on service dependency graphs, thereby reducing memory footprint and improving scalability.
Overall, the article outlines the past, present, and future of service mesh, emphasizing the need for easier data‑plane implementations, performant packet processing, modern transport protocols, and smarter configuration distribution.