An Overview of Linux Direct Rendering Manager (DRM): History, Architecture, and Simple Example
Linux’s Direct Rendering Manager (DRM) replaces the legacy framebuffer architecture with a kernel subsystem that unifies GPU and display control through libdrm, KMS, and GEM. This article traces its evolution from the 1999 DRI origins to the atomic APIs, and illustrates its use with a step-by-step modeset example.
Traditional Linux display driver development relied on the framebuffer (FB) architecture. As graphics hardware evolved to support overlay planes, GPU acceleration, and hardware cursors, the FB model could no longer meet the needs of multi-application access control and advanced features. The Direct Rendering Manager (DRM) was created to address these limitations.
DRM is the kernel subsystem that manages interaction with graphics cards. User‑space programs can use DRM’s APIs to perform 3D rendering, video decoding, and GPU compute tasks.
1.1 DRM Development History
1999: Precision Insight developed the DRI framework for XFree86 4.0 to better support 3dfx graphics cards, producing the first DRM code.
October 2008: Linux kernel 2.6.27 reorganized the DRM source code under drivers/gpu/drm/, placing each vendor’s driver in its own subdirectory.
2014–2015: The atomic mode-setting API was developed and merged (initial support landed in Linux 3.19, and the atomic IOCTL was exposed to user space in Linux 4.2); many drivers subsequently migrated to the new API.
2018: Ten additional drivers based on the atomic framework were merged into the kernel.
1.2 Advantages of DRM over FB Architecture
Native support for multi‑layer composition (FB does not).
Support for VSYNC, DMA‑BUF, asynchronous updates, and fence mechanisms (absent in FB).
Unified management of GPU and display drivers, simplifying software upgrades, maintenance, and management.
1.3 DRM Graphics Display Framework
Each detected GPU becomes a DRM device, represented by a device file such as /dev/dri/cardX. The framework consists of three main parts:
libdrm (interface library): wraps the kernel’s low-level DRM IOCTLs in stable, reusable APIs so that applications do not have to issue raw ioctls directly.
KMS (Kernel Mode Setting): configures display modes, updates framebuffers, composes multiple layers, and sets parameters such as resolution, refresh rate, and power state.
GEM (Graphics Execution Manager): manages graphics memory, handling allocation and release of display buffers.
Figure 1.1: Overview of the DRM graphics display architecture (source: Wikipedia).
1.4 Elements Involved in the DRM Framework
The following diagram shows the flow from an application calling DRM to the final screen output.
Figure 1.2: DRM framework flowchart.
2 DRM Driver Framework
2.1 DRM Driver Objects
DRM objects form the core of the framework. In the diagram, blue areas represent hardware abstractions, while brown areas represent software abstractions. GEM buffers are described by struct drm_gem_object, and the mode-setting objects (CRTC, encoder, connector, plane, framebuffer) each embed a struct drm_mode_object. Note that drm_panel is not a mode object but a collection of callbacks that decouples LCD panel drivers from encoder drivers.
Figure 2.1: Core DRM components.
2.2 How Abstract Hardware Relates to DRM Objects
Understanding DRM objects is straightforward; the challenge lies in linking them to actual hardware. The following example uses a MIPI DSI interface to illustrate the correspondence between hardware and DRM objects.
Figure 2.2: Typical MIPI DSI hardware connection.
Figure 2.3: Mapping between hardware and DRM objects.
Component explanations are provided in the accompanying diagram.
3 Simple DRM Example
Because the DRM codebase and GPU logic are extensive, practical experimentation is essential for comprehension. The following example walks through a mode-setting (modeset) workflow.
Figure 3.1: DRM modeset process overview.
3.1 Open DRM Device File
When the DRM framework loads, it creates a device node such as /dev/dri/card0. User-space applications open this node to access GPU functionality.
3.2 Acquire GPU Resource Handles
After opening the device, applications call DRM APIs to retrieve resource handles for further operations.
3.3 Get Connector ID
From the drmModeRes structure, the connector objects are enumerated to find a connected connector and obtain its ID.
3.4 Create Framebuffer
A framebuffer is allocated and memory‑mapped so that pixel data can be written into it.
3.5 Set CRTC Mode
After clearing the framebuffer, the CRTC is configured with drmModeSetCrtc(), which takes the device file descriptor, the CRTC handle, the framebuffer handle, the X/Y origin, the connector list, and the display mode.
3.6 Resource Cleanup (Optional)
After the mode is set, a display application typically keeps running, so explicit cleanup is often unnecessary; the kernel releases the resources when the process exits. Long-running programs should still free what they allocate.
Conclusion
This article introduced the development history, driver framework, and a simple example of the DRM architecture. Understanding DRM requires hands‑on exploration of its code paths. Future sections will delve deeper into specific components.
While DRM meets the demands of modern display hardware, legacy devices and software still rely on FB support. The DRM codebase therefore includes an fbdev compatibility layer that exposes a simulated FB device (e.g., /dev/fb0) on top of a DRM driver, via drivers such as drivers/gpu/drm/xxx/drv.c.
References
Linux GPU Driver Developer’s Guide: https://www.kernel.org/doc/html/latest/gpu/index.html
Kernel Mode Setting (KMS): https://www.kernel.org/doc/html/latest/gpu/drm-kms.html#kms-properties
OPPO Kernel Craftsman
Sharing Linux kernel-related cutting-edge technology, technical articles, technical news, and curated tutorials