
Mastering eBPF Maps: Design, Implementation, and Real‑World Use Cases

This article provides an in‑depth analysis of BPF maps—explaining their design principles, core features, various map types with code examples, and the macro expansion process that turns high‑level BCC helpers into native kernel map definitions for cloud‑native observability.


With the rapid development of cloud-native and observability technologies, eBPF (extended Berkeley Packet Filter) has become one of the most important innovations in the Linux kernel. Within the eBPF ecosystem, BPF maps are the core component: they serve as the data bridge between kernel space and user space and form the foundation of complex tracing, monitoring, and network-processing programs.

What are BPF maps?

BPF maps provide generic storage of various types that can be shared between the kernel and user space. Available storage types include hash tables, arrays, bloom filters, and radix trees. Some map types exist to support specific BPF helper functions, which operate on the map contents. In BPF programs, maps are accessed via BPF helper functions documented in the bpf‑helpers(7) manual page.

Core Features

Persistent storage

Map lifecycle is independent of the eBPF program

Data remains after program reload

Supports inter‑process data sharing

Efficient access

Zero‑copy data transfer

Atomic operation support

Per‑CPU optimized concurrent access

Diverse types

Hash tables, arrays, stacks, queues, etc.

Specialized optimizations for different scenarios

Main map types

1. BPF_MAP_TYPE_HASH – General hash table

Hash tables provide O(1) key‑value access performance. Typical use cases include process state tracking, network connection storage, and dynamic configuration management.

<code>// Define a struct for process information used as the value type
struct proc_info {
    u64 start_time;
    u64 cpu_time;
    char comm[16];
};

// Declare a hash map named process_map with key type u32 and value type proc_info, capacity 10240
BPF_HASH(process_map, u32, struct proc_info, 10240);

int trace_exec(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    struct proc_info info = {};
    info.start_time = bpf_ktime_get_ns();
    bpf_get_current_comm(&info.comm, sizeof(info.comm));
    // Update the hash map with the pid as key and info as value
    process_map.update(&pid, &info);
    return 0;
}
</code>

2. BPF_MAP_TYPE_PERCPU_HASH – High‑performance per‑CPU statistics

This is a hash map variant where each CPU has its own copy, eliminating lock contention and making it ideal for high‑frequency counting.

<code>// System call counter map
BPF_PERCPU_HASH(syscall_stats, u32, u64, 512);

int count_syscalls(struct pt_regs *ctx) {
    u32 syscall_nr = ctx->orig_ax;
    u64 *count = syscall_stats.lookup(&syscall_nr);
    if (count) {
        __sync_fetch_and_add(count, 1);
    } else {
        u64 initial = 1;
        syscall_stats.update(&syscall_nr, &initial);
    }
    return 0;
}
</code>

3. BPF_MAP_TYPE_ARRAY – Efficient array access

Arrays provide O(1) indexed access, suitable for fixed‑size data sets.

<code>// CPU usage monitoring array supporting 256 CPUs
BPF_ARRAY(cpu_usage, u64, 256);

int sample_cpu_usage(struct pt_regs *ctx) {
    int cpu = bpf_get_smp_processor_id();
    u64 timestamp = bpf_ktime_get_ns();
    cpu_usage.update(&cpu, &timestamp); // update expects a pointer to the value
    return 0;
}
</code>

4. BPF_MAP_TYPE_PERF_EVENT_ARRAY – Efficient event transmission

This is the standard mechanism for transferring event data from kernel to user space.

<code>// Define a custom file_event structure
struct file_event {
    u32 pid;
    u64 timestamp;
    char filename[256];
};

BPF_PERF_OUTPUT(events);

int trace_openat(struct pt_regs *ctx) {
    struct file_event event = {};
    event.pid = bpf_get_current_pid_tgid() >> 32;
    event.timestamp = bpf_ktime_get_ns();
    const char __user *filename = (char *)PT_REGS_PARM2(ctx);
    bpf_probe_read_user_str(&event.filename, sizeof(event.filename), filename);
    events.perf_submit(ctx, &event, sizeof(event));
    return 0;
}
</code>

5. BPF_MAP_TYPE_RINGBUF – Modern ring buffer

Ring Buffer is a modern replacement for Perf Event Array, offering better memory efficiency, variable‑length records, and reduced user‑space polling overhead.

<code>// Event record carried through the ring buffer
struct net_event {
    u32 src_ip;
    u32 dst_ip;
    u64 timestamp;
};

// BCC sizes ring buffers in pages: 256 pages x 4 KiB = 1 MiB
BPF_RINGBUF_OUTPUT(events, 256);

int trace_network(struct pt_regs *ctx) {
    // Reserve space directly in the buffer (no intermediate copy)
    struct net_event *event = events.ringbuf_reserve(sizeof(*event));
    if (!event)
        return 0; // buffer full: the event is dropped
    // get_src_ip/get_dst_ip are placeholder helpers, not BPF built-ins
    event->src_ip = get_src_ip(ctx);
    event->dst_ip = get_dst_ip(ctx);
    event->timestamp = bpf_ktime_get_ns();
    events.ringbuf_submit(event, 0);
    return 0;
}
</code>

How BCC high‑level map interfaces are translated to kernel maps

Using the BPF_HASH() macro as an example, the expansion proceeds through several layers of macros defined in src/cc/export/helpers.h:

1. Macro expansion hierarchy

<code>#define BPF_HASHX(_1, _2, _3, _4, NAME, ...) NAME
#define BPF_HASH(...) \
    BPF_HASHX(__VA_ARGS__, BPF_HASH4, BPF_HASH3, BPF_HASH2, BPF_HASH1)(__VA_ARGS__)
#define BPF_HASH4(_name, _key_type, _leaf_type, _size) \
    BPF_TABLE("hash", _key_type, _leaf_type, _name, _size)
#define BPF_TABLE(_table_type, _key_type, _leaf_type, _name, _max_entries) \
    BPF_F_TABLE(_table_type, _key_type, _leaf_type, _name, _max_entries, 0)
</code>

2. Core conversion: BPF_F_TABLE macro

<code>#define BPF_F_TABLE(_table_type, _key_type, _leaf_type, _name, _max_entries, _flags) \
struct _name##_table_t { \
    _key_type key; \
    _leaf_type leaf; \
    _leaf_type * (*lookup) (_key_type *); \
    _leaf_type * (*lookup_or_init) (_key_type *, _leaf_type *); \
    u32 max_entries; \
    int flags; \
}; \
__attribute__((section("maps/" _table_type))) \
struct _name##_table_t _name = { .flags = (_flags), .max_entries = (_max_entries) }; \
BPF_ANNOTATE_KV_PAIR(_name, _key_type, _leaf_type)
</code>

3. Key mechanism: section attribute

<code>__attribute__((section("maps/" _table_type)))
</code>

The __attribute__((section(...))) extension tells Clang (and GCC) to place the generated structure into a specific ELF section, e.g., maps/hash for a hash map.

4. BCC compiler handling

In

src/cc/frontends/clang/b_frontend_action.cc

, the

BTypeVisitor::VisitVarDecl

method parses the section attribute and maps it to a native BPF map type:

<code>std::string section_attr = string(A-&gt;getName()), pinned;
int bpf_map_type = BPF_MAP_TYPE_UNSPEC;
if (section_attr == "maps/hash") {
    map_type = BPF_MAP_TYPE_HASH;
} else if (section_attr == "maps/array") {
    map_type = BPF_MAP_TYPE_ARRAY;
} else if (section_attr == "maps/percpu_hash") {
    map_type = BPF_MAP_TYPE_PERCPU_HASH;
} // ... more mapping rules
</code>

5. Step‑by‑step example

Assume we declare a BCC map:

<code>BPF_HASH(process_map, u32, struct proc_info);
</code>

The macro expansion proceeds as follows:

<code>BPF_HASHX(process_map, u32, struct proc_info, BPF_HASH4, BPF_HASH3, BPF_HASH2, BPF_HASH1)(process_map, u32, struct proc_info)
// BPF_HASHX selects BPF_HASH3 (three‑argument version)
BPF_HASH3(process_map, u32, struct proc_info)
// Expands to BPF_TABLE("hash", u32, struct proc_info, process_map, 10240)
BPF_F_TABLE("hash", u32, struct proc_info, process_map, 10240, 0)
// Final structure placed in the "maps/hash" section
struct process_map_table_t {
    u32 key;
    struct proc_info leaf;
    // function pointers for lookup, update, etc.
    u32 max_entries;
    int flags;
};
__attribute__((section("maps/hash")))
struct process_map_table_t process_map = { .flags = 0, .max_entries = 10240 };
BPF_ANNOTATE_KV_PAIR(process_map, u32, struct proc_info);
</code>

The following diagram shows the overall compilation and loading flow for BCC‑generated maps:

[Figure: BCC compilation flow]

For comparison, the native kernel BPF map workflow is illustrated below:

[Figure: native kernel BPF map workflow]
Tags: cloud-native, observability, eBPF, Linux Kernel, BPF maps, BCC
Written by

Big Data Technology Tribe

Focused on computer science and cutting‑edge tech, we distill complex knowledge into clear, actionable insights. We track tech evolution, share industry trends and deep analysis, helping you keep learning, boost your technical edge, and ride the digital wave forward.
