Performance Comparison of User‑Kernel Communication Mechanisms: ioctl, proc, and Netlink
This article examines three Linux user‑kernel communication methods—ioctl, proc, and Netlink—by describing their principles, presenting experimental setups and code, measuring nanosecond‑level call latency, and offering guidance on selecting the most suitable mechanism for a project.
The article introduces the need for efficient user‑kernel data exchange in Linux and notes the lack of publicly available latency data for common mechanisms, motivating a comparative study of ioctl, proc, and Netlink.
It explains each mechanism: ioctl provides a system‑call interface for device control; proc enables one‑way data transfer via virtual files under /proc; and Netlink uses socket‑based bidirectional messaging, allowing either user space or the kernel to initiate an exchange.
Experimental hardware consists of a VMware Workstation 15 Pro VM running Ubuntu 20.04 on an Intel i7‑6700 (8 cores) with 8 GB RAM. The test scenario focuses on single system‑call latency (ioctl and sendto), measured over 5,000 iterations, each transmitting 1,024 bytes.
Test code for the ioctl benchmark (ioctltest) is:
unsigned char msg[1024];
int rc, ret;
struct timespec ts_start, ts_end;
rc = clock_gettime(CLOCK_MONOTONIC, &ts_start);
for (int i = 0; i < 5000; i++) {
ret = ioctl(fd, IOCWREG, msg); /* pass the buffer, not its address */
if (ret) { perror("ioctl write"); exit(-4); }
}
rc = clock_gettime(CLOCK_MONOTONIC, &ts_end);
long dsec = ts_end.tv_sec - ts_start.tv_sec;
long dnsec = ts_end.tv_nsec - ts_start.tv_nsec;
if (dnsec < 0) { dsec--; dnsec += 1000000000L; } /* normalize the difference */
printf("CLOCK_MONOTONIC reports %ld.%09ld seconds\n", dsec, dnsec);

The Netlink benchmark (netlinktest) uses:
unsigned char msg[1024];
int rc, ret;
struct timespec ts_start, ts_end;
rc = clock_gettime(CLOCK_MONOTONIC, &ts_start);
for (int i = 0; i < 5000; i++) {
ret = sendto(skfd, nlh, nlh->nlmsg_len, 0, (struct sockaddr *)&daddr, sizeof(struct sockaddr_nl));
if (ret < 0) { perror("sendto"); close(skfd); exit(-1); } /* sendto returns -1 on error */
}
rc = clock_gettime(CLOCK_MONOTONIC, &ts_end);
long dsec = ts_end.tv_sec - ts_start.tv_sec;
long dnsec = ts_end.tv_nsec - ts_start.tv_nsec;
if (dnsec < 0) { dsec--; dnsec += 1000000000L; } /* normalize the difference */
printf("CLOCK_MONOTONIC reports %ld.%09ld seconds\n", dsec, dnsec);

Simple shell scripts repeatedly clear caches, run each test, and pause one second between runs.
Measured results show average per‑call times of roughly 0.00127 ms (1.27 µs) for ioctl and 0.00094 ms (0.94 µs) for Netlink, a modest advantage for Netlink. Both methods are fast enough that latency alone should not drive the design decision.
Based on the findings, the article recommends using ioctl when the project relies on file‑system or driver‑based interactions and transfers small amounts of data, while Netlink is preferable for asynchronous communication, multicast needs, or when the kernel should initiate sessions.
In conclusion, the performance gap between ioctl and Netlink is minimal; selection should prioritize architectural fit and feature requirements rather than raw latency.
Coolpad Technology Team