Boost iOS Multithreading Efficiency with Reader‑Writer Locks and GCD Barriers
This article explains how iOS developers can improve multithreaded performance by understanding the readers‑writers problem, using GCD barrier APIs or pthread_rwlock_t, and applying proper lock granularity and recursive locks to achieve safe and efficient concurrent reads and writes.
In most iOS projects, developers rely on GCD, @synchronized, or NSLock for multithreading, but these approaches may not guarantee thread safety or fully exploit concurrency. This guide examines common pitfalls and presents the readers‑writers problem as a framework for improving efficiency.
Readers‑writers problem
The problem involves two groups of threads sharing data: multiple readers can access the data simultaneously without side effects, while a writer must have exclusive access, blocking all other readers and writers until it finishes.
Allow multiple readers to read concurrently.
Allow only one writer to write.
A writer must wait until all readers and other writers have finished.
When a writer starts, existing readers and writers must exit first.
Many client‑side apps fetch data from the network, process it, and display lists, which naturally fits the readers‑writers pattern.
Below is a simple cache implementation using a mutex lock:
- (void)setCache:(id)cacheObject forKey:(NSString *)key {
if (key.length == 0) { return; }
[_cacheLock lock];
self.cacheDic[key] = cacheObject;
...
[_cacheLock unlock];
}
- (id)cacheForKey:(NSString *)key {
if (key.length == 0) { return nil; }
[_cacheLock lock];
id cacheObject = self.cacheDic[key];
...
[_cacheLock unlock];
return cacheObject;
}
This approach ensures safety but also serializes read operations, forfeiting the parallelism that concurrent readers could provide.
A classic readers‑writers solution uses semaphores:
semaphore ReaderWriterMutex = 1; // mutual exclusion for read/write
int Rcount = 0; // number of readers
semaphore CountMutex = 1; // protects Rcount
writer(){
while(true){
P(ReaderWriterMutex);
write;
V(ReaderWriterMutex);
}
}
reader(){
while(true){
P(CountMutex);
if(Rcount == 0) P(ReaderWriterMutex); // first reader blocks writers
++Rcount;
V(CountMutex);
read;
P(CountMutex);
--Rcount;
if(Rcount == 0) V(ReaderWriterMutex); // last reader releases writer
V(CountMutex);
}
}
On iOS, these semaphore primitives map directly to GCD's dispatch_semaphore_t, but managing a reader count by hand is cumbersome and easy to get wrong. Fortunately, iOS ships ready-made readers-writers locks.
pthread_rwlock_t
Usage example:
var lock = pthread_rwlock_t()
pthread_rwlock_init(&lock, nil)
// Read section
pthread_rwlock_rdlock(&lock)
// read shared resource
pthread_rwlock_unlock(&lock)
// Write section
pthread_rwlock_wrlock(&lock)
// write shared resource
pthread_rwlock_unlock(&lock)
pthread_rwlock_destroy(&lock)
While functional, pthread_rwlock_t is error-prone: every lock must be paired with an unlock, and the lock must be explicitly destroyed. A more convenient alternative on iOS is the GCD barrier.
GCD barrier
Although not designed specifically for the readers‑writers problem, a barrier can act as a writer lock: all tasks submitted to the queue before the barrier must finish before it runs, and tasks submitted after it wait until the barrier completes. The diagram below illustrates this behavior:
Rewriting the earlier cache example with a concurrent queue and a barrier yields:
// Using GCD barrier (readers‑writers lock)
static dispatch_queue_t queue;
// Create the concurrent queue once (a file-scope static cannot be
// initialized by a function call), e.g. in +initialize:
queue = dispatch_queue_create("com.gfzq.testQueue", DISPATCH_QUEUE_CONCURRENT);
- (void)setCache:(id)cacheObject forKey:(NSString *)key {
if (key.length == 0) { return; }
dispatch_barrier_async(queue, ^{ self.cacheDic[key] = cacheObject; ... });
}
- (id)cacheForKey:(NSString *)key {
if (key.length == 0) { return nil; }
__block id cacheObject = nil;
dispatch_sync(queue, ^{ cacheObject = self.cacheDic[key]; ... });
return cacheObject;
}
This version allows concurrent reads while isolating writes, improving both safety and performance.
The same pattern lets custom getters and setters provide atomic behavior for a property:
@property (atomic, copy) NSString *someString;
- (NSString *)someString {
__block NSString *tempString;
dispatch_sync(_syncQueue, ^{ tempString = _someString; });
return tempString;
}
- (void)setSomeString:(NSString *)someString {
dispatch_barrier_async(_syncQueue, ^{ _someString = [someString copy]; ... });
}
Experiments (see figures) show that readers‑writers locks (both GCD barrier and pthread_rwlock) significantly outperform a plain NSLock, especially when readers outnumber writers. The gap between GCD barrier and pthread_rwlock is minimal, so the barrier is recommended for its simplicity and safety.
Key takeaways:
Readers‑writers locks (GCD barrier or pthread_rwlock) provide noticeable speed gains over simple locks.
The advantage grows with more readers and fewer writers.
GCD barrier is preferred for iOS due to ease of use and comparable performance.
When deciding whether to use a readers‑writers lock, ensure the workload matches two conditions: (1) reads do not perform writes, and (2) the number of readers greatly exceeds the number of writers.
Lock granularity
Too small a granularity can leave windows where data changes after a lock is released but before dependent operations run, leading to race conditions. Expanding the critical section to include all dependent statements (e.g., the NSLog after setting an atomic property) resolves this.
Conversely, overly large granularity—such as wrapping unrelated work in a lock or using @synchronized(self) everywhere—serializes independent tasks and can cause deadlocks. Using distinct lock objects for unrelated sections and avoiding unnecessary work inside critical sections improves concurrency.
Recursive lock
Calling a locked method from another locked method on the same thread causes a deadlock. Replacing NSLock with NSRecursiveLock (or a recursive pthread_mutex) prevents this:
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&_lock, &attr);
pthread_mutexattr_destroy(&attr);
Note that @synchronized uses a recursive lock internally.
Conclusion
Writing efficient, safe multithreaded iOS code requires more than just knowing GCD, @synchronized, and NSLock; developers must understand underlying synchronization concepts, choose appropriate lock granularity, and apply readers‑writers locks or recursive locks where they fit.