
How eBPF Toolchains Simplify Kernel Tracing from BCC to BPFtrace

This article walks through the high-level components of an eBPF program: the backend, the loader, the frontend, and the data structures. It shows how the original sock_example.c is split into separate files, how LLVM compiles restricted C into an ELF object, and how projects such as BCC, BPFtrace, and IOVisor automate development, deployment, and cloud-native observability, while weighing their trade-offs for embedded environments.

Qingyun Technology Community

In this part we define the four high-level components of an eBPF program:

Backend: eBPF bytecode executed in the kernel, writing to maps and ring buffers.

Loader: loads the backend bytecode into the kernel; the backend is automatically unloaded when the loader process exits (unless it is pinned).

Frontend: reads the data written by the backend and presents it to the user.

Data structures: kernel-managed maps and ring buffers that mediate communication and must exist before the backend is loaded.
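The division of labour above can be modeled in a few lines of plain Python. This is a conceptual sketch only, not real eBPF: the shared dict stands in for a kernel map, and the function names are illustrative.

```python
# Conceptual model of the four components; the "map" is the only
# channel between the backend (kernel side) and frontend (user side).
kernel_map = {}                      # data structure: shared map

def backend(protocol, length):
    """Runs once per event; aggregates data into the map."""
    kernel_map[protocol] = kernel_map.get(protocol, 0) + length

def loader(events):
    """Installs the backend and feeds it events (the kernel's job in real eBPF)."""
    for proto, length in events:
        backend(proto, length)

def frontend():
    """Reads the map and presents results; never calls the backend directly."""
    return dict(kernel_map)

loader([(6, 1500), (17, 512), (6, 40)])   # TCP=6, UDP=17
print(frontend())                          # {6: 1540, 17: 512}
```

In a real program the loading is done via the bpf() syscall: the backend then runs on each event in kernel context, and the map is the only channel back to user space.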

The original sock_example.c places all components in a single C file. In the BCC‑based rewrite the same logic is split:

Lines 40‑45 create the map data structures.

Lines 47‑61 define the backend.

Lines 63‑76 load the backend into the kernel.

Lines 78‑91 implement the frontend that prints packet counts.

More complex eBPF programs can have multiple backends, loaders, and frontends interacting across processes.

LLVM compiles "restricted C" (no unbounded loops, limited instruction count) into an ELF object containing the eBPF bytecode. The ELF is loaded with the bpf() syscall via libbpf. This separates backend definition from the loader and frontend.

#include <uapi/linux/bpf.h>
#include <uapi/linux/if_ether.h>
#include <uapi/linux/if_packet.h>
#include <uapi/linux/ip.h>
#include "bpf_helpers.h"

/* Array map: one slot per IP protocol number, each holding a byte count. */
struct bpf_map_def SEC("maps") my_map = {
    .type = BPF_MAP_TYPE_ARRAY,
    .key_size = sizeof(u32),
    .value_size = sizeof(long),
    .max_entries = 256,
};

SEC("socket1")
int bpf_prog1(struct __sk_buff *skb) {
    /* Read the protocol byte from the IP header of the raw packet. */
    int index = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
    long *value;

    value = bpf_map_lookup_elem(&my_map, &index);
    if (value)
        /* Atomically accumulate the packet length for this protocol. */
        __sync_fetch_and_add(value, skb->len);
    return 0;
}

char _license[] SEC("license") = "GPL";

The resulting ELF object, sockex1_kern.o, contains the backend and its map definitions. The loader and the user-space frontend (sockex1_user.c) parse the ELF, create the maps, load the bytecode, and then read and print the results.
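The compile step that produces such an object file is typically a single clang invocation with the BPF target. The sketch below wraps it in Python; the exact command is an assumption and may need extra include flags for your kernel tree.

```python
import shlex
import subprocess

# Typical clang invocation for building eBPF bytecode: -target bpf selects
# the eBPF backend, and -O2 is needed so the verifier accepts the output.
# File names follow the kernel-samples example; adjust for your sources.
COMPILE_CMD = "clang -O2 -target bpf -c sockex1_kern.c -o sockex1_kern.o"

def compile_backend(cmd=COMPILE_CMD):
    """Run the compile command, raising if clang reports an error."""
    subprocess.run(shlex.split(cmd), check=True)

# compile_backend()  # run on a machine with clang and kernel headers installed
```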

Introducing the restricted-C abstraction makes backend code easier to write and opens the door to generating it from higher-level languages (Go, Rust) alongside C, but it adds loader complexity (ELF parsing). The frontend remains largely unaffected.

The BCC project automates this workflow: it provides a compiler collection that turns restricted C into eBPF bytecode at runtime, plus Python and Lua bindings for writing loaders and frontends. Because BCC standardises the map API, a backend can be loaded with a minimal two-line Python loader and its maps read through a uniform interface.
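A minimal sketch of such a loader and frontend, using BCC's Python bindings. The program and function names here are illustrative, and actually running it requires the bcc package and root privileges.

```python
# Restricted-C backend, compiled by BCC at load time. BPF_ARRAY declares
# the map; lock_xadd is BCC's atomic-add helper, mirroring sockex1.
BPF_PROGRAM = r"""
#include <uapi/linux/if_ether.h>
#include <uapi/linux/ip.h>

BPF_ARRAY(counts, long, 256);          // map: bytes seen per IP protocol

int count_protocols(struct __sk_buff *skb) {
    u32 index = load_byte(skb, ETH_HLEN + offsetof(struct iphdr, protocol));
    long *value = counts.lookup(&index);
    if (value)
        lock_xadd(value, skb->len);    // atomic add, as in sockex1
    return 0;
}
"""

def run(device="lo", seconds=5):
    import time
    from bcc import BPF  # imported lazily: bcc is only needed at load time

    b = BPF(text=BPF_PROGRAM)                         # compile + create maps
    fn = b.load_func("count_protocols", BPF.SOCKET_FILTER)
    BPF.attach_raw_socket(fn, device)                 # loader step
    time.sleep(seconds)
    for k, v in b["counts"].items():                  # frontend step
        if v.value:
            print(f"protocol {k.value}: {v.value} bytes")

# run()  # uncomment on a machine with bcc installed (requires root)
```

Note how the map definition, backend, loader, and frontend all live in one short script, yet remain distinct steps.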

BCC compiler collection: the framework and libraries used to build eBPF tools.

BCC-tools: a collection of ready-made eBPF programs and examples.

For higher‑level tracing, BPFtrace offers a DTrace‑like DSL. Example:

bpftrace -e 'tracepoint:raw_syscalls:sys_enter {@[pid, comm] = count();}'

BPFtrace abstracts many details but still relies on BCC for loading socket‑filter programs. It excels for quick analysis but may lack features for complex socket filtering.

In cloud environments the IOVisor project builds on the eBPF VM to provide a user‑space ecosystem (Hover framework) that manages eBPF modules, pushes them to the cloud, and offers a CLI and web UI. However, Hover’s Go‑based components increase binary size, making it unsuitable for small 32‑bit ARM devices.

In summary, the eBPF user‑space ecosystem—BCC, BPFtrace, and IOVisor—greatly simplifies program development and deployment, but the large footprint of these tools limits their applicability on resource‑constrained embedded devices, a topic explored in the next part of the series.

