How eBPF Powers Modern Software Network Functions
The article examines why eBPF has become a core building block for cloud‑native network functions, outlines its performance, security and flexibility advantages, discusses technical challenges such as memory constraints and missing SIMD support, and presents the eNetSTL library that mitigates these issues with concrete design details and benchmark results.
Advantages of Implementing Network Functions with eBPF
eBPF‑based network functions are now deployed in production by many companies and form a critical part of cloud infrastructure. Notable examples include Meta's Katran load balancer and Google Cloud's eBPF data plane. Academic research projects such as BMC (NSDI 2021), SPRIGHT (SIGCOMM 2022), Morpheus (ASPLOS 2022), Electrode (NSDI 2023) and DINT (NSDI 2024) as well as open‑source projects like Cilium, Polycube and Katran demonstrate a growing trend.
Key advantages are:
Native kernel integration: eBPF, originating from the kernel, fits naturally into kernel‑centric cloud ecosystems. For instance, the OvS team showed that the XDP data path outperforms OvS's DPDK path for intra‑host container communication.
Balanced performance, CPU utilization, security, isolation and operational cost compared with DPDK‑based solutions. eBPF can handle high‑throughput packet processing without saturating the CPU, allowing network and non‑network workloads to coexist on the same host.
Dynamic, safe user‑code loading without kernel source changes, which improves maintainability, flexibility and speeds up development and deployment.
Technical Challenges of Using eBPF for Network Functions
2.1 Inability to Implement Certain Functions
eBPF imposes strict limits on non‑contiguous memory usage, preventing implementation of core components such as skip‑list key‑value stores or red‑black‑tree priority queues. Although Linux 6.1+ supports dynamic memory allocation whose objects can be persisted in BPF maps, the verifier still limits each map to a fixed, statically declared number of dynamic objects, so variable‑size dynamic memory remains unsupported.
Example code (shown as an image in the original article) demonstrates that eBPF can allocate a fixed number of objects but cannot handle a variable number of allocations.
2.2 Sub‑optimal Performance
eBPF’s RISC‑like instruction set lacks SIMD and bit‑scan instructions (e.g., FFS, find‑first‑set), causing performance drops. In sketch‑based network functions, the absence of SIMD leads to a 49.2% slowdown. Additionally, the helper bpf_get_prandom_u32 incurs high overhead; invoking it per packet reduces NitroSketch performance by 46.6%.
2.3 Limitations of Existing Solutions
Two broad approaches are considered:
Extending the eBPF architecture (new instructions, verifier enhancements, user‑space verification, new runtimes and language‑level safety mechanisms). This requires invasive kernel changes across up to 14 hardware architectures and risks new bugs and security issues, making deployment difficult.
Implementing unsupported or slow functions as kernel modules (using kptr/kfunc) or integrating them directly into the kernel (new helpers and BPF maps). Full kernel integration would cause large code churn and could destabilize the kernel, while per‑function integration risks frequent module swaps and instability.
Standard‑Library‑Based Optimisation: eNetSTL
3.1 Common Design Patterns in Network Functions
Typical patterns include:
Bit‑scan instructions (FFS, POPCNT) for fast priority‑queue lookups.
Parallel computation of multiple hash functions for sketches and Bloom filters.
Basic data structures such as top‑k heaps and bucket linked lists.
Probabilistic random‑number usage for heavy‑hitter detection.
Non‑contiguous memory structures like skip‑lists and red‑black trees.
Storing data in contiguous memory (e.g., DPDK’s cuckoo hash) to reduce collisions.
3.2 Design and Implementation of the eNetSTL Library
eNetSTL provides a high‑performance, low‑overhead API library for eBPF without requiring kernel modifications. It leverages kernel functions (kfunc) and kernel pointers (kptr) implemented in a kernel module, keeping the rest of the library self‑contained and compatible across kernel versions.
The library consists of:
Memory wrapper: Enables safe use of non‑contiguous memory while preserving eBPF’s security guarantees.
Algorithms: Bit operations, SIMD‑based parallel hash computation, and parallel comparison algorithms.
Data structures: List‑bucket structures and a random‑number pool supplying geometrically distributed values.
The memory wrapper uses a proxy kptr to manage newly allocated node kptrs, bypassing the BPF‑map limitation of a static kptr count. eNetSTL routes pointers through kfuncs tagged KF_ACQUIRE to safely acquire the next node’s pointer, allowing direct access such as a->b inside eBPF.
The key memory‑wrapper APIs are presented in a diagram in the original article.
