Getting Started with GPU Kernel Virtualization: Building a Simple Linux Module
This tutorial walks through the motivation for Nvidia GPU kernel interception, explains Linux kernel module basics and privilege rings, shows how to set up an Ubuntu environment, write and compile a minimal LKM, load and test it, then create a fake GPU character device and mount it into a Docker container for verification.
Motivation
To develop an Nvidia GPU kernel interception/virtualization feature, the author learned Linux kernel module development.
Linux Kernel Modules and Privilege Rings
Kernel modules are compiled binaries inserted into the kernel and run in Ring 0 (kernel mode) on x86‑64, providing unrestricted access and high performance. Intel x86 defines four privilege levels: Ring 0 (kernel), Ring 1 and Ring 2 (typically drivers), and Ring 3 (user space).
Preparation
Use an Ubuntu Linux machine (physical or virtual; a virtual machine is preferred for module development).
Ensure basic C programming knowledge.
Install development tools and kernel headers.
$ docker run -d ubuntu:22.04 sleep 100000000000
$ apt-get update
$ apt-get install -y build-essential linux-headers-$(uname -r)

First Kernel Module
Create a working directory and source file lkm_example.c:
$ mkdir -p ~/src/lkm_example
$ cd ~/src/lkm_example

Content of lkm_example.c:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("RongFu.Leng");
MODULE_DESCRIPTION("A simple example Linux module.");
MODULE_VERSION("0.0.1");
static int __init lkm_example_init(void) {
    printk(KERN_INFO "Hello, World!\n");
    return 0;
}

static void __exit lkm_example_exit(void) {
    printk(KERN_INFO "Goodbye, World!\n");
}

module_init(lkm_example_init);
module_exit(lkm_example_exit);

The #include directives pull in required kernel headers. MODULE_LICENSE can be set to various values; the full list is available via:
$ grep "MODULE_LICENSE" -B 27 /usr/src/linux-headers-$(uname -r)/include/linux/module.h
Init and exit functions are static, returning int and void respectively. printk logs to the kernel ring buffer; KERN_INFO sets the log priority. module_init and module_exit register the load and unload callbacks.
Makefile
obj-m += lkm_example.o
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Run make to produce lkm_example.ko.
Loading and Testing
$ apt-get install -y kmod
$ insmod lkm_example.ko
$ dmesg
$ lsmod | grep lkm_
$ rmmod lkm_example

The kernel log shows “Hello, World!” after insertion and “Goodbye, World!” after removal.
FakeGPU Character Device
Implement a minimal framework that registers a FakeGPU character device and mounts it into a Docker container at /dev/nvidia. Define the device operations:
static struct file_operations file_ops = {
    .read = device_read,
    .write = device_write,
    .open = device_open,
    .release = device_release,
    .poll = fake_gpu_km_poll,
    .unlocked_ioctl = fake_gpu_km_unlocked_ioctl,
    .compat_ioctl = fake_gpu_km_compat_ioctl,
    .mmap = fake_gpu_km_mmap,
};

Register the character device in module_init:
static int __init fake_gpu_init(void) {
    /* Passing 0 asks the kernel to allocate a major number dynamically */
    major_num = register_chrdev(0, "fake_gpu", &file_ops);
    if (major_num < 0)
        return major_num;
    printk(KERN_INFO "fake_gpu: registered with major %d\n", major_num);
    return 0;
}

Build and run:
# On Ubuntu 22.04
$ make all # compile the module
$ make test # load the module
$ mknod /dev/fake_gpu0 c 511 0   # create device node (511 is the major number returned by register_chrdev)

Mount the device into a Docker container:
$ docker run -ti -e NVIDIA_VISIBLE_DEVICES=none \
--device /dev/fake_gpu0:/dev/nvidia0 \
--device /dev/nvidiactl:/dev/nvidiactl \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
nvidia/cuda:12.2.2-cudnn8-devel-ubuntu20.04 /bin/bash

Inside the container, nvidia-smi reports “No devices were found”. The author suspects the major-number mismatch (real Nvidia devices use major 195) prevents the container from recognizing the fake device.
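One way to probe that suspicion is to request major 195 explicitly instead of a dynamic one. The sketch below is introduced here, not taken from the original code (NVIDIA_MAJOR and the placeholder handler bodies are assumptions), and it is unverified whether this alone satisfies nvidia-smi:

```c
#include <linux/fs.h>
#include <linux/init.h>
#include <linux/module.h>

#define NVIDIA_MAJOR 195  /* major number used by real Nvidia character devices */

static ssize_t device_read(struct file *f, char __user *buf,
                           size_t len, loff_t *off) {
    return 0;  /* placeholder: always report end-of-file */
}

static ssize_t device_write(struct file *f, const char __user *buf,
                            size_t len, loff_t *off) {
    return len;  /* placeholder: swallow writes */
}

static int device_open(struct inode *inode, struct file *f) { return 0; }
static int device_release(struct inode *inode, struct file *f) { return 0; }

static struct file_operations file_ops = {
    .read = device_read,
    .write = device_write,
    .open = device_open,
    .release = device_release,
};

static int __init fake_gpu_init(void) {
    /* Request major 195 explicitly; this fails if a real nvidia.ko
       (or anything else) already owns that major number. */
    int ret = register_chrdev(NVIDIA_MAJOR, "fake_gpu", &file_ops);
    return ret < 0 ? ret : 0;
}

static void __exit fake_gpu_exit(void) {
    /* Release the major number claimed in fake_gpu_init */
    unregister_chrdev(NVIDIA_MAJOR, "fake_gpu");
}

module_init(fake_gpu_init);
module_exit(fake_gpu_exit);
MODULE_LICENSE("GPL");
```

With an explicit major, the device node would be created with mknod /dev/fake_gpu0 c 195 0.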
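As an aside on major numbers: because register_chrdev(0, ...) allocates the major dynamically, the hard-coded 511 in the mknod step above is fragile across reboots and reloads. A sketch of reading it from /proc/devices instead (assumes the module registered under the name "fake_gpu"):

```shell
# Look up the dynamically assigned major for "fake_gpu" and create the node.
major=$(awk '$2 == "fake_gpu" {print $1}' /proc/devices)
if [ -n "$major" ]; then
    mknod /dev/fake_gpu0 c "$major" 0
else
    echo "fake_gpu not registered"
fi
```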
Full demo code is available at:
https://github.com/lengrongfu/study-demo/blob/main/gpu/fakegpu/README.md
Infra Learning Club
Infra Learning Club shares study notes, cutting-edge technology, and career discussions.
