
Why YYCache Excels: Deep Dive into iOS Cache Design & Performance

This article examines the YYCache iOS library, covering its multi‑level architecture, LRU implementation, thread‑safe APIs, eviction policies, lock choices, disk‑storage strategies, Swift integration, performance benchmarks, and practical takeaways for building high‑performance mobile caches.

WeDoctor Frontend Technology

Why I Looked at YYCache Source

I started reading the source of YYCache, a five‑year‑old Objective‑C cache library, after seeing its LRU algorithm mentioned on a developer forum; I was curious how a well‑maintained open‑source cache actually works in practice.

Cache Design

Multi‑level cache, reasonable entry point

Effective caches combine fast in‑memory storage with persistent disk storage.

Memory cache: backed by RAM, extremely fast, but lives only while the process runs.

Disk cache: slower, but persists across launches.

A good public API should be simple and consistent:

Simple get/set methods

Support for non‑system data types / custom models

Both synchronous and asynchronous variants

Minimal extra APIs, such as custom storage paths

YYCache stores custom models using the NSCoding protocol.

- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key;
- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key withBlock:(nullable void(^)(void))block;

- (nullable id<NSCoding>)objectForKey:(NSString *)key;
- (void)objectForKey:(NSString *)key withBlock:(nullable void(^)(NSString *key, id<NSCoding> object))block;

In Swift the subscript operator offers a cleaner syntax.

subscript(key: String) -> NSCoding? {
    get {
        if let returnValue = object(forKey: key) as? NSCoding {
            return returnValue
        }
        return nil
    }
    set {
        if let newValue = newValue {
            set(object: newValue, forKey: key)
        } else {
            removeObject(forKey: key)
        }
    }
}

Safe and accurate data access

Thread‑safety can be achieved by:

Placing operations on a concurrent queue and using dispatch_barrier_async to serialize writes.

Running operations on a serial queue.

Locking every operation.

The first two approaches add scheduling overhead, while the third requires careful lock selection.

Reasonable cache eviction strategies

Typical policies include LFU, LRU, ARC (adaptive replacement, combining recency and frequency), FIFO, MRU, LRU‑K, and 2Q. Mobile caches are small, so plain LRU is usually sufficient; YYCache indeed uses LRU.

Excellent performance

Cache operations should run in O(1) time (or O(log N) at worst) with low memory peaks. Disk reads are the biggest bottleneck; YYCache switches between SQLite and file storage based on data size and may employ mmap for large blobs. Its main performance techniques:

Multi‑level cache + data sync

Classic O(1) LRU implementation

Use pthread_mutex locking to avoid thread‑switch overhead

Store small data in SQLite, large data as files

Low‑priority queue for object release

Use __unsafe_unretained and direct variable access to reduce overhead

Cache compiled sqlite3_stmt objects

Direct memory address access instead of getter/setter

YYCache Design and Implementation

O(1) and LRU implementation

The algorithm uses a doubly‑linked list together with a hash table. Insertion moves a node to the head; lookup retrieves the node from the hash table and moves it to the head; eviction removes the tail node.

@interface _YYLinkedMapNode : NSObject {
    __unsafe_unretained _YYLinkedMapNode *_prev;
    __unsafe_unretained _YYLinkedMapNode *_next;
    id _key;
    id _value;
    NSUInteger _cost;
    NSTimeInterval _time;
}
@end

@interface _YYLinkedMap : NSObject {
    CFMutableDictionaryRef _dic;
    _YYLinkedMapNode *_head; // MRU
    _YYLinkedMapNode *_tail; // LRU
}
@end

Lock choice

Memory cache originally used OSSpinLock (fast but can cause priority inversion). Disk cache used dispatch_semaphore. Apple later deprecated OSSpinLock; modern code prefers os_unfair_lock or pthread_mutex.

OSSpinLock:                 109.16 ms
os_unfair_lock:             162.40 ms
dispatch_semaphore:         140.14 ms
pthread_mutex:              222.66 ms
NSCondition:                221.56 ms
NSLock:                     244.92 ms
pthread_mutex(recursive):   365.52 ms
NSRecursiveLock:            437.68 ms
NSConditionLock:            784.76 ms
@synchronized:             1087.38 ms

Atomic properties guarantee indivisible reads/writes but not ordering; explicit locks are needed for correct logical sequences.

@property (atomic) int temp;

- (void)atomicPropertyTest {
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    for (int i = 0; i < 20; i++) {
        dispatch_async(queue, ^{
            NSLog(@"before %d, %d", i, self.temp);
            self.temp = self.temp + 1;
            NSLog(@"after %d, %d", i, self.temp);
        });
    }
}

Disk cache design

Data smaller than 20 KB is stored in SQLite; larger blobs are written to files while an SQLite table keeps an index. The library caches sqlite3_stmt objects to speed up repeated queries.

Other implementation details

Use __unsafe_unretained for linked‑list pointers to avoid the overhead of weak references.

Release objects on a low‑priority queue to keep the UI thread responsive.

Determine main‑thread execution via pthread_main_np() or by comparing the current queue label with the main queue label.

Cache in Swift

YYCache is no longer maintained. Modern Swift alternatives include Track, Cache (Codable support), and Haneke/HanekeSwift. Benchmarks on iPhone X (iOS 14) show YYCache still performs well, but HanekeSwift offers a more ergonomic API.

Takeaways

A good cache should expose a tiny, consistent API and support both sync and async usage.

O(1) LRU via hash‑table + doubly‑linked list is the classic solution.

Lock choice matters; modern iOS prefers os_unfair_lock or pthread_mutex.

Hybrid disk storage (SQLite for small blobs, files for large blobs) balances speed and space.

Performance testing should include multithreaded workloads, memory/CPU peaks, and actual disk usage.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Performance · Cache · iOS · Concurrency · Swift · LRU
Written by

WeDoctor Frontend Technology

Official WeDoctor Group frontend public account, sharing original tech articles, events, job postings, and occasional daily updates from our tech team.