
eBPF Tutorial 36: Tracing Nginx Requests with bpftrace

This tutorial shows how to use eBPF, bpftrace, and the funclatency tool to instrument key Nginx functions, measure their execution latency, analyze the distribution of request processing times, and identify performance bottlenecks for optimization.

Linux Kernel Journey

Introduction

Nginx is a widely used web server and reverse proxy known for high performance, stability, and low resource consumption. Monitoring and optimizing Nginx under heavy load is essential. eBPF (extended Berkeley Packet Filter) enables deep insight into Nginx performance without modifying source code or restarting the service.

Background

Nginx

Nginx uses an event‑driven architecture to handle thousands of concurrent connections with minimal resources. Its performance depends on several critical functions involved in request handling, response generation, and event processing.

eBPF

eBPF programs run in a secure sandbox inside the Linux kernel and can attach to system calls, tracepoints, and uprobes (user‑space probes). This makes eBPF a powerful observability tool for collecting detailed performance data in real time, especially for measuring function execution latency.

Uprobes

Uprobes trace user‑space application functions by attaching to their entry and exit points, capturing precise timing information. Each kernel uprobe hit traps into the kernel, which adds overhead on hot paths; a user‑mode eBPF runtime such as bpftime (based on LLVM JIT/AOT) can reduce that cost.
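Before writing probes, it helps to confirm which Nginx functions are attachable on a given system. A minimal check, assuming the binary lives at /usr/sbin/nginx (the usual Debian/Ubuntu location; adjust for your install):

```shell
# List uprobe-attachable symbols in the Nginx binary that match the
# HTTP request path; requires a non-stripped binary and root privileges.
sudo bpftrace -l 'uprobe:/usr/sbin/nginx:ngx_http_*'
```

If this prints nothing, the binary was likely stripped and the probes below will fail to attach.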

Nginx performance‑critical functions

ngx_http_process_request : handles the start of an incoming HTTP request.

ngx_http_upstream_send_request : sends a request to an upstream server when Nginx acts as a reverse proxy.

ngx_http_finalize_request : finalizes request processing and sends the response.

ngx_event_process_posted : processes queued events in the event loop.

ngx_handle_read_event : handles read events from sockets, crucial for network I/O performance.

ngx_writev_chain : writes the response back to the client, typically used with the write event loop.
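Uprobes can only attach to these functions if their symbols survive in the binary's symbol table; a stripped build exposes none of them. A quick sanity check (binary path assumed as /usr/sbin/nginx):

```shell
# Verify the traced symbols exist in the Nginx binary; no output means the
# binary is stripped and you need a build with symbols (or a debug package).
nm /usr/sbin/nginx | grep -E 'ngx_http_process_request|ngx_http_finalize_request'
```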

Tracing Nginx functions with bpftrace

The following bpftrace script records the start time of each function and prints the elapsed time when the function returns.

#!/usr/sbin/bpftrace

// Trace the start of HTTP request processing
uprobe:/usr/sbin/nginx:ngx_http_process_request
{
    printf("HTTP request start (tid: %d)\n", tid);
    @start[tid] = nsecs;
}

// Trace the end of HTTP request processing
uretprobe:/usr/sbin/nginx:ngx_http_finalize_request
/@start[tid]/
{
    $elapsed = nsecs - @start[tid];
    printf("HTTP request latency: %d ns (tid: %d)\n", $elapsed, tid);
    delete(@start[tid]);
}

// Trace the start of upstream request sending
uprobe:/usr/sbin/nginx:ngx_http_upstream_send_request
{
    printf("Upstream request start (tid: %d)\n", tid);
    @up_start[tid] = nsecs;
}

// Trace the end of upstream request sending
uretprobe:/usr/sbin/nginx:ngx_http_upstream_send_request
/@up_start[tid]/
{
    $elapsed = nsecs - @up_start[tid];
    printf("Upstream request latency: %d ns (tid: %d)\n", $elapsed, tid);
    delete(@up_start[tid]);
}

// Trace the start of event processing
uprobe:/usr/sbin/nginx:ngx_event_process_posted
{
    printf("Event processing start (tid: %d)\n", tid);
    @event_start[tid] = nsecs;
}

// Trace the end of event processing
uretprobe:/usr/sbin/nginx:ngx_event_process_posted
/@event_start[tid]/
{
    $elapsed = nsecs - @event_start[tid];
    printf("Event processing latency: %d ns (tid: %d)\n", $elapsed, tid);
    delete(@event_start[tid]);
}

Running the script

Start Nginx, then run the script with bpftrace. Generate HTTP traffic using curl or similar tools.
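While the script is attached, it needs traffic to observe. One simple way to generate load, assuming Nginx is listening on localhost port 80:

```shell
# Fire a burst of sequential HTTP requests so the probes have traffic to record.
for i in $(seq 1 100); do
    curl -s -o /dev/null http://localhost/
done
```

For sustained or concurrent load, a benchmarking tool such as wrk or ab serves the same purpose.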

# bpftrace /home/yunwei37/bpf-developer-tutorial/src/39-nginx/trace.bt
Attaching 4 probes...
Event processing start (tid: 1071)
Event processing latency: 166396 ns (tid: 1071)
Event processing start (tid: 1071)
Event processing latency: 87998 ns (tid: 1071)
HTTP request start (tid: 1071)
HTTP request latency: 1083969 ns (tid: 1071)
Event processing start (tid: 1071)
Event processing latency: 92597 ns (tid: 1071)

The script prints a message when each traced function is entered and the elapsed time when it returns, allowing per‑request latency calculation and identification of slow paths.
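For aggregated results without per‑event printf noise, the same probe pair can feed bpftrace's built‑in hist() and print a latency distribution on exit. A minimal sketch, again assuming the binary is at /usr/sbin/nginx:

```shell
# Collect an in-kernel latency histogram for ngx_http_process_request;
# press Ctrl-C to print the @latency_ns distribution.
sudo bpftrace -e '
uprobe:/usr/sbin/nginx:ngx_http_process_request { @start[tid] = nsecs; }
uretprobe:/usr/sbin/nginx:ngx_http_process_request /@start[tid]/ {
    @latency_ns = hist(nsecs - @start[tid]);
    delete(@start[tid]);
}'
```

Aggregating in the kernel like this keeps overhead low under heavy traffic, since no event is pushed to user space per request.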

Measuring function latency with funclatency

The funclatency tool provides a latency distribution histogram. The command below measures the latency of ngx_http_process_request:

# sudo ./funclatency /usr/sbin/nginx:ngx_http_process_request
tracing /usr/sbin/nginx:ngx_http_process_request...
tracing func ngx_http_process_request in /usr/sbin/nginx...
Tracing /usr/sbin/nginx:ngx_http_process_request.  Hit Ctrl-C to exit
^C
      nsec                 : count    distribution
         0 -> 1            : 0        |                                        |
    524288 -> 1048575      : 16546    |****************************************|
   1048576 -> 2097151      : 2296     |*****                                   |
   2097152 -> 4194303      : 1264     |***                                     |
   4194304 -> 8388607      : 293      |                                        |
   8388608 -> 16777215     : 37       |                                        |
Exiting trace of /usr/sbin/nginx:ngx_http_process_request

Result summary

Most requests, about 81% (16,546 of 20,436), complete within 524,288–1,048,575 ns (roughly 0.5–1 ms), while a small tail stretches to about 16 ms. This distribution helps pinpoint performance bottlenecks and guides optimization efforts.
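The bucket counts can also be turned into rough percentile bounds. A small awk sketch over the counts copied from the histogram above (each input line is a bucket's upper bound in nanoseconds followed by its count):

```shell
# Estimate p50/p99 upper bounds from the funclatency bucket counts.
summary=$(awk '
    { total += $2; upper[NR] = $1; count[NR] = $2 }
    END {
        for (i = 1; i <= NR; i++) {
            cum += count[i]
            if (!p50 && cum >= total * 0.50) p50 = upper[i]
            if (!p99 && cum >= total * 0.99) p99 = upper[i]
        }
        printf "p50 <= %d ns, p99 <= %d ns", p50, p99
    }' <<'EOF'
1048575 16546
2097151 2296
4194303 1264
8388607 293
16777215 37
EOF
)
echo "$summary"
```

For this run the estimate says the median request finishes within about 1 ms and 99% finish within about 8.4 ms; histogram buckets only bound the true percentiles from above.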

Conclusion

eBPF tracing of Nginx requests with bpftrace and funclatency provides concrete latency measurements, enables bottleneck detection, and supports data‑driven optimization of Nginx deployments.

Repository with examples and the funclatency binary: https://github.com/eunomia-bpf/bpf-developer-tutorial

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: Performance Monitoring, eBPF, Nginx, Linux tracing, bpftrace, funclatency
Written by Linux Kernel Journey