When to Use RCU Locks for High‑Read Multi‑Core Embedded Kernels
The article compares spin_lock, rwlock, seqlock and RCU for protecting a shared kernel resource in a multi‑core embedded system, explains how RCU enables lock‑free reads, discusses the challenges of safe deletion, and outlines the RCU read‑copy‑update workflow.
Lock options for a shared kernel resource
In a multi‑core embedded kernel we have a resource that is read frequently by normal processes and interrupt handlers but only occasionally modified. The article lists four kernel lock mechanisms and their effects:
spin_lock : mutual exclusion for all accesses; simple to use.
rwlock : readers can run in parallel; writers are exclusive; readers and writers exclude each other; requires careful classification of readers/writers.
seqlock : similar effect to rwlock but writer‑friendly; writers are never blocked by readers, while readers must retry if a write occurred during their read section; also needs careful classification.
RCU : readers run in parallel without any synchronization cost; writers are exclusive and must trade space for time; RCU protects pointer objects.
For low‑performance requirements spin_lock or rwlock are sufficient, but for read‑heavy workloads RCU gives better performance despite higher complexity.
Lock‑free read example
Consider a growing “snake‑list” where a kernel thread continuously adds entries and other threads only read the list. The article shows three possible approaches:
Use spin_lock if read/write requirements are modest.
Use rwlock when reads are frequent and writes are slow.
Observe that the lock may be unnecessary because aligned pointer writes are atomic on modern CPUs. By inserting a memory barrier before the prev->next = new assignment (as __list_add_rcu does via rcu_assign_pointer), so that the new entry is fully initialized before its pointer becomes visible to readers, the list can be updated without a lock.
Deletion challenges
When a writer also deletes entries, simply updating prev->next atomically is not enough because the removed entry must be reclaimed safely. Two problems arise:
Atomic pointer update (already solved by the add case).
Management of the deleted resource.
The kernel commonly uses reference counting (kref) to track usage of an object; the object is released when the count drops to zero. Maintaining such a scheme is complex.
RCU offers a simpler solution: after removal, the writer waits for a grace period during which all CPUs have scheduled at least once, guaranteeing that no reader can still hold a reference to the old pointer. The steps are:
Replace the spin/read lock that marks the critical section with preempt_disable, ensuring the reader cannot be pre‑empted inside the RCU read side.
The writer calls synchronize_rcu (or call_rcu) and sleeps until the grace period ends.
The kernel’s timer interrupt periodically checks whether every CPU has performed a context switch; once this condition holds, the writer may safely release the old entry.
Thus RCU is reader‑friendly (no overhead) but imposes extra cost on writers.
RCU basics
Updating a list entry follows the Read‑Copy‑Update pattern: copy the entry, modify the copy, replace the original pointer, then reclaim the old entry. The update is split into two phases—Removal and Reclamation. Removal uses atomic pointer stores to achieve lock‑free deletion; Reclamation is deferred until the grace period ends.
RCU provides rcu_read_lock / rcu_read_unlock to delimit read critical sections and synchronize_rcu / call_rcu to wait for or schedule reclamation. A diagram in the original article shows multiple readers on different CPUs executing inside RCU read sections while a writer performs removal and then waits for the grace period.
The article notes that the full implementation details are complex and will be explored later.
This article has been distilled and summarized from source material, then republished for learning and reference.
Linux Code Review Hub
