Mastering Spinlocks: Understanding Linux Kernel Synchronization and Avoiding Deadlocks
This article explains Linux kernel spinlocks, from basic concepts and atomic operations to memory barriers, busy-waiting, and proper usage. It illustrates common pitfalls such as deadlocks, priority inversion, and recursion, and provides practical guidelines, code examples, and debugging tools to help developers implement safe, efficient synchronization.
What Is a Spinlock?
A spinlock is a low‑overhead lock used in the Linux kernel in which a waiting thread repeatedly checks the lock variable until it becomes free. Unlike an ordinary mutex, it never puts the thread to sleep, which makes it ideal for short critical sections in interrupt handlers, SMP schedulers, or hot data paths accessed at high frequency.
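As a first taste, here is a minimal sketch of the pattern in kernel C. The lock and counter names are made up for illustration and do not belong to any real subsystem:

#include <linux/spinlock.h>

/* Illustrative names only: counter_lock and shared_counter are not
 * part of any real kernel subsystem. */
static DEFINE_SPINLOCK(counter_lock);
static unsigned long shared_counter;

void bump_counter(void)
{
    spin_lock(&counter_lock);    /* spins until the lock is free */
    shared_counter++;            /* short critical section: a single update */
    spin_unlock(&counter_lock);  /* release immediately */
}

The critical section is a single increment, which is exactly the kind of very short work a spinlock is meant to protect.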
How Spinlocks Work
1. Acquiring the Lock
The thread attempts to atomically change the lock flag from 0 (unlocked) to 1, using an instruction such as xchg (on x86) or a compare‑and‑swap primitive such as compare_exchange_weak (C++ std::atomic). If the previous value was 0, the lock has been acquired and the thread enters the critical section.
        mov  eax, 1             ; load 1 into eax
        xchg [lock_addr], eax   ; atomically swap lock value with eax
        cmp  eax, 0             ; if previous value was 0, lock acquired
        je   acquired
spin:
        mov  eax, 1
        xchg [lock_addr], eax
        cmp  eax, 0
        je   acquired
        jmp  spin
acquired:
        ; critical section starts here
2. Spin‑Waiting
If the lock is already held, the thread stays in a tight loop, repeatedly executing the atomic test‑and‑set until the lock becomes free. This busy‑waiting consumes CPU cycles but avoids the context‑switch overhead of sleeping locks.
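For readers who prefer C to assembly, the same test‑and‑set spin loop can be sketched in user space with C11 atomics. This illustrates the principle only; it is not the kernel's implementation, and toy_spinlock_t is a made‑up name:

#include <stdatomic.h>

/* Minimal user-space test-and-set spinlock; illustrative only.
 * The flag must start at 0 (unlocked) before first use. */
typedef struct {
    atomic_int flag;   /* 0 = unlocked, 1 = locked */
} toy_spinlock_t;

static void toy_spin_lock(toy_spinlock_t *l)
{
    /* atomic_exchange is the C11 analogue of xchg: it writes 1 and
     * returns the previous value in one atomic step. */
    while (atomic_exchange_explicit(&l->flag, 1, memory_order_acquire) == 1)
        ;   /* busy-wait until the previous value was 0 */
}

static void toy_spin_unlock(toy_spinlock_t *l)
{
    atomic_store_explicit(&l->flag, 0, memory_order_release);
}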
3. Releasing the Lock
When the critical section finishes, the lock flag is cleared with another atomic store, allowing waiting threads to acquire it.
        mov  [lock_addr], 0     ; release the lock
Implementation Details in the Kernel
The Linux kernel implements spinlocks with a four‑part framework: a lock structure (spinlock_t), initialization, lock/unlock operations, and memory barriers that prevent instruction reordering. The snippet below is a simplified illustration of these parts; the real kernel code is considerably more elaborate.
typedef struct {
    atomic_t lock;              // 0 = unlocked, 1 = locked
} spinlock_t;

static inline void spin_lock_init(spinlock_t *lock) {
    atomic_set(&lock->lock, 0);
}

static inline void spin_lock(spinlock_t *lock) {
    while (atomic_xchg(&lock->lock, 1) == 1)
        ;                       // busy‑wait
    smp_mb();                   // acquire barrier
}

static inline void spin_unlock(spinlock_t *lock) {
    smp_mb();                   // release barrier
    atomic_set(&lock->lock, 0);
}
The smp_mb() calls act as memory barriers: the barrier after acquisition keeps the memory accesses inside the critical section from being reordered before the lock is taken, and the barrier before the release keeps them from drifting past the unlock.
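To see what the barriers buy us, consider two CPUs sharing a value protected by this simplified lock. Without the release ordering, the store to the shared value could become visible after the store that clears the lock, letting the other CPU acquire the lock and read stale data. A sketch with made‑up names (data_lock, shared_data), assuming data_lock has been initialized with spin_lock_init():

/* Illustrative sketch built on the simplified lock above. */
static spinlock_t data_lock;
static int shared_data;

void writer(void)
{
    spin_lock(&data_lock);     /* acquire: the store below cannot move above this */
    shared_data = 42;
    spin_unlock(&data_lock);   /* release: the store above cannot move below this */
}

int reader(void)
{
    int v;

    spin_lock(&data_lock);
    v = shared_data;           /* sees the writer's value, never a stale one */
    spin_unlock(&data_lock);
    return v;
}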
Common Pitfalls and Deadlock Scenarios
Non‑reentrancy: A spinlock cannot be acquired recursively. A recursive function that locks the same spinlock will spin forever, causing deadlock.
Priority inversion: A low‑priority thread holding a spinlock can keep a high‑priority thread spinning; if the holder can be preempted (for example by a medium‑priority thread on a preemptible or real‑time configuration), the wait is prolonged further.
Interrupt context misuse: If an interrupt handler tries to acquire a spinlock already held by the code it interrupted on the same CPU, the handler spins forever on a lock that can never be released, deadlocking both (see the sketch after this list).
Long critical sections: Spinlocks are meant for very short sections; prolonged holding leads to wasted CPU cycles and possible starvation.
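The interrupt‑context case is worth spelling out in code. In this hypothetical driver sketch (dev_lock, dev_state, and my_irq_handler() are made‑up names), process context takes the lock with plain spin_lock(), so a device interrupt arriving on the same CPU deadlocks:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

/* Hypothetical driver sketch of the interrupt-context pitfall. */
static DEFINE_SPINLOCK(dev_lock);
static int dev_state;

void process_context_path(void)
{
    spin_lock(&dev_lock);      /* BUG: local interrupts remain enabled */
    dev_state++;               /* if the device IRQ fires here on this CPU... */
    spin_unlock(&dev_lock);
}

static irqreturn_t my_irq_handler(int irq, void *data)
{
    spin_lock(&dev_lock);      /* ...the handler spins on a lock the interrupted
                                * code can never release: deadlock */
    dev_state = 0;
    spin_unlock(&dev_lock);
    return IRQ_HANDLED;
}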
Deadlock Avoidance Techniques
Keep critical sections tiny (e.g., a few instructions or a simple counter update).
Never sleep or perform blocking I/O while holding a spinlock; avoid calling any function that might sleep.
Maintain a consistent lock acquisition order across the code base.
Use kernel tools such as Lockdep to detect lock order violations and self‑deadlocks.
When needed, disable local interrupts with spin_lock_irqsave() and restore them with spin_unlock_irqrestore() to prevent interrupt‑induced deadlocks.
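Applying the last rule to the broken driver sketch above, the process‑context path disables local interrupts for the duration of the critical section, so the handler cannot run on this CPU until the lock is released:

/* Same hypothetical driver as above, now using the interrupt-safe variant. */
void process_context_path_fixed(void)
{
    unsigned long flags;

    spin_lock_irqsave(&dev_lock, flags);      /* disable local IRQs, then take the lock */
    dev_state++;                              /* the IRQ handler cannot run on this CPU now */
    spin_unlock_irqrestore(&dev_lock, flags); /* release and restore the saved IRQ state */
}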
Practical Kernel API Usage
The kernel provides several helper functions:
spin_lock_init(&my_lock) – initialize a spinlock.
spin_lock(&my_lock) – acquire the lock (busy‑waiting if necessary).
spin_unlock(&my_lock) – release the lock.
spin_lock_irq(&my_lock) / spin_unlock_irq(&my_lock) – acquire/release while disabling local interrupts.
spin_lock_irqsave(&my_lock, flags) / spin_unlock_irqrestore(&my_lock, flags) – save the current interrupt state, disable interrupts, and later restore the saved state.
Read‑write spinlocks (rwlock_t) allow multiple concurrent readers or a single exclusive writer.
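A short sketch of the read‑write variant (cfg_lock and cfg_value are illustrative names): readers take read_lock() and may run concurrently, while write_lock() gives a single writer exclusive access:

#include <linux/spinlock.h>

/* Illustrative names: cfg_lock and cfg_value are not real kernel symbols. */
static DEFINE_RWLOCK(cfg_lock);
static int cfg_value;

int read_config(void)
{
    int v;

    read_lock(&cfg_lock);      /* many readers may hold this at once */
    v = cfg_value;
    read_unlock(&cfg_lock);
    return v;
}

void write_config(int v)
{
    write_lock(&cfg_lock);     /* exclusive: blocks readers and other writers */
    cfg_value = v;
    write_unlock(&cfg_lock);
}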
Example: Protecting a Kernel Linked List
#include <linux/module.h>
#include <linux/list.h>
#include <linux/slab.h>      /* kfree() */
#include <linux/spinlock.h>

struct my_node {
    struct list_head list;
    int data;
};

static LIST_HEAD(global_list);
static spinlock_t list_lock;
static int __init my_module_init(void)
{
    spin_lock_init(&list_lock);
    return 0;
}

void list_add_node(struct my_node *node)
{
    spin_lock(&list_lock);
    list_add(&node->list, &global_list);
    spin_unlock(&list_lock);
}

void list_traverse(void)
{
    struct my_node *node;

    spin_lock(&list_lock);
    list_for_each_entry(node, &global_list, list)
        printk(KERN_INFO "Node data: %d\n", node->data);
    spin_unlock(&list_lock);
}
static void __exit my_module_exit(void)
{
    struct my_node *node, *next;

    spin_lock(&list_lock);
    list_for_each_entry_safe(node, next, &global_list, list) {
        list_del(&node->list);
        kfree(node);
    }
    spin_unlock(&list_lock);
}

module_init(my_module_init);
module_exit(my_module_exit);
MODULE_LICENSE("GPL");This module demonstrates the full lifecycle: initializing the lock, protecting list modifications, and safely cleaning up while holding the lock.
Debugging with Lockdep
When the kernel is built with CONFIG_PROVE_LOCKING=y, Lockdep tracks lock dependencies and can report self‑deadlocks, AB‑BA cycles, and lock order violations. A typical report might show, for example, a self‑deadlock caused by calling get_task_comm() while the caller already holds the lock that get_task_comm() takes internally; the fix is to release the lock before the call and reacquire it afterward.
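The shape of that kind of fix is sketched below with made‑up names: my_lock and helper_that_takes_my_lock() are hypothetical, the latter standing in for any function (such as get_task_comm()) that acquires the same lock internally:

#include <linux/spinlock.h>

/* Hypothetical names for illustration only. */
static DEFINE_SPINLOCK(my_lock);
void helper_that_takes_my_lock(void);

void fixed_path(void)
{
    spin_lock(&my_lock);
    /* ... work that genuinely needs my_lock ... */
    spin_unlock(&my_lock);          /* release before the nested call */

    helper_that_takes_my_lock();    /* would self-deadlock if my_lock were still held */

    spin_lock(&my_lock);            /* reacquire; revalidate any state that may have changed */
    /* ... remaining work ... */
    spin_unlock(&my_lock);
}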
By following the concepts, code patterns, and safety rules presented above, developers can confidently use spinlocks to achieve low‑latency synchronization in the Linux kernel while avoiding the classic pitfalls that lead to deadlocks and performance degradation.