
Key Considerations for Implementing a Local Cache in Java

This article outlines the essential design considerations for building a local cache in Java, covering data structures, capacity limits, eviction policies, expiration handling, thread safety, blocking mechanisms, simple APIs, and optional persistence, with illustrative code examples.

Top Architect

Preface

While reviewing MyBatis source code, I noticed its caching subsystem. MyBatis provides a first‑level cache (simple) and a more feature‑rich second‑level cache. To build a comparable local cache, we need to address several design points.

Considerations

The main aspects are how data is stored, how many entries can be kept, how excess entries are handled, expiration, thread safety, blocking, API simplicity, and persistence.

1. Data Structure

The simplest approach is to store entries in a Map. More sophisticated systems (e.g., Redis) support a variety of structures such as hash, list, set, and sorted set, backed internally by linked lists, skip lists, and similar data structures.

2. Capacity Limit

Because a local cache resides in memory, we usually set a maximum number of entries (e.g., 1024). When the limit is reached, a removal strategy must be applied.
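A minimal sketch of such a bound, assuming a hypothetical BoundedCache class and DEFAULT_MAX_SIZE constant (neither is from MyBatis); when the cache is full, the oldest insertion is dropped before the new entry goes in:

```java
import java.util.LinkedHashMap;

// Capacity-check sketch: a default maximum size and a put() that evicts
// the oldest insertion once the bound is reached. Illustrative names only.
public class BoundedCache<K, V> {
    private static final int DEFAULT_MAX_SIZE = 1024;

    private final int maxSize;
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>();

    public BoundedCache() { this(DEFAULT_MAX_SIZE); }
    public BoundedCache(int maxSize) { this.maxSize = maxSize; }

    public void put(K key, V value) {
        if (!map.containsKey(key) && map.size() >= maxSize) {
            // At capacity: apply the eviction strategy (oldest-first here).
            K eldest = map.keySet().iterator().next();
            map.remove(eldest);
        }
        map.put(key, value);
    }

    public V get(K key) { return map.get(key); }
    public int size() { return map.size(); }
}
```

Any of the strategies from the next section can replace the oldest-first choice inside the capacity check.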

3. Eviction Strategy

Common strategies include:

LRU – Least Recently Used, often implemented with LinkedHashMap.

FIFO – First In First Out, typically using a Queue.

LFU – Least Frequently Used, based on access counts.

SOFT – Uses SoftReference and is cleared by the GC when memory is low.

WEAK – Uses WeakReference and is cleared more aggressively.
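The LRU strategy above can be sketched in a few lines by leaning on LinkedHashMap's access-order mode and its removeEldestEntry hook; LruCache and its capacity field are illustrative names, not from any library:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU sketch: with accessOrder = true, LinkedHashMap moves each accessed
// entry to the tail, so the head is always the least recently used entry.
// Returning true from removeEldestEntry evicts it once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // initial capacity, load factor, accessOrder = true
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With capacity 2, putting a and b, touching a, then putting c evicts b, since b is by then the least recently used entry.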

4. Expiration Time

Entries can be given a TTL. Two deletion modes are common:

Passive – Check expiration on each get/put operation, e.g.: if (System.currentTimeMillis() - lastClear > clearInterval) { clear(); }

Active – Run a background job that periodically scans and removes expired entries.
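Both modes can be combined in one sketch: each value carries its expiry timestamp, a scheduled background task sweeps expired entries (active), and get() double-checks the deadline as a fallback (passive). TtlCache and its field names are illustrative, not from any library:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// TTL cache sketch combining active (periodic sweep) and passive
// (check-on-read) expiration.
public class TtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    public TtlCache(long sweepIntervalMillis) {
        // Active deletion: periodically drop entries whose deadline passed.
        cleaner.scheduleAtFixedRate(
                () -> map.values().removeIf(
                        e -> e.expiresAt <= System.currentTimeMillis()),
                sweepIntervalMillis, sweepIntervalMillis, TimeUnit.MILLISECONDS);
    }

    public void put(K key, V value, long ttlMillis) {
        map.put(key, new Entry<>(value, ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (e.expiresAt <= System.currentTimeMillis()) {
            map.remove(key); // passive fallback: expired entries count as misses
            return null;
        }
        return e.value;
    }

    public void shutdown() {
        cleaner.shutdownNow();
    }
}
```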

5. Thread Safety

Use thread‑safe containers such as ConcurrentHashMap or wrap non‑thread‑safe maps with synchronization. MyBatis, for example, wraps a HashMap with a SynchronizedCache :

public synchronized void putObject(Object key, Object object) {
    // ...
}

public synchronized Object getObject(Object key) {
    // ...
}

6. Simple API

A cache should expose a minimal set of operations, e.g. get, put, remove, clear, and size. MyBatis defines a Cache interface:

public interface Cache {
    String getId();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    Object removeObject(Object key);
    void clear();
    int getSize();
    ReadWriteLock getReadWriteLock();
}

Guava’s Cache interface is similarly concise:

public interface Cache<K, V> {
    V getIfPresent(Object key);
    V get(K key, Callable<? extends V> loader) throws ExecutionException;
    ImmutableMap<K, V> getAllPresent(Iterable<?> keys);
    void put(K key, V value);
    void putAll(Map<? extends K, ? extends V> m);
    void invalidate(Object key);
    void invalidateAll(Iterable<?> keys);
    void invalidateAll();
    long size();
    CacheStats stats();
    ConcurrentMap<K, V> asMap();
    void cleanUp();
}

7. Persistence

Persisting cache data allows recovery after a restart. Ehcache supports disk persistence via a configuration flag:

diskPersistent="false" // whether to persist to disk

Redis provides AOF and RDB persistence mechanisms.

8. Blocking Mechanism

When a cache miss occurs, it is useful to block other threads until the value is loaded. MyBatis’s BlockingCache does this. A classic implementation can be found in "Java Concurrency in Practice":

import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

// Functional interface from the book's example, included for completeness.
interface Computable<A, V> {
    V compute(A arg) throws InterruptedException, ExecutionException;
}

public class Memoizer<A, V> implements Computable<A, V> {
    private final Map<A, Future<V>> cache = new ConcurrentHashMap<>();
    private final Computable<A, V> c;

    public Memoizer(Computable<A, V> c) {
        this.c = c;
    }

    @Override
    public V compute(A arg) throws InterruptedException, ExecutionException {
        while (true) {
            Future<V> f = cache.get(arg);
            if (f == null) {
                Callable<V> eval = () -> c.compute(arg);
                FutureTask<V> ft = new FutureTask<>(eval);
                f = cache.putIfAbsent(arg, ft);
                if (f == null) {
                    // This thread won the race: run the computation itself.
                    f = ft;
                    ft.run();
                }
            }
            try {
                // Winners and losers alike block here until the value is ready.
                return f.get();
            } catch (CancellationException e) {
                // The computation was cancelled: discard the Future and retry.
                cache.remove(arg, f);
            }
        }
    }
}

Implementation Sketch

Below is a brief illustration of how each consideration can be realized in code.

1. Data Structure Example

Map<Object, Object> cache = new ConcurrentHashMap<>();

MyBatis uses a plain HashMap for its second‑level cache, which is wrapped by SynchronizedCache to ensure thread safety.

2. Capacity Limit Example

Define a default maximum size (e.g., 1024). When the cache reaches this size, trigger the eviction strategy described earlier.

3. Eviction Strategy Example

Implement LRU using LinkedHashMap, FIFO with a Queue, LFU by tracking access counts, etc.
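As a complement to the LinkedHashMap-based LRU shown earlier, the count-tracking idea behind LFU can be sketched as follows; LfuCache is an illustrative name, and production LFU implementations use far more efficient bookkeeping than this linear scan:

```java
import java.util.HashMap;
import java.util.Map;

// LFU sketch: track an access count per key and, when the cache is full,
// evict the key with the smallest count.
public class LfuCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Integer> counts = new HashMap<>();

    public LfuCache(int capacity) { this.capacity = capacity; }

    public void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            // Evict the least frequently used key (linear scan for clarity).
            K victim = null;
            int min = Integer.MAX_VALUE;
            for (Map.Entry<K, Integer> e : counts.entrySet()) {
                if (e.getValue() < min) {
                    min = e.getValue();
                    victim = e.getKey();
                }
            }
            values.remove(victim);
            counts.remove(victim);
        }
        values.put(key, value);
        counts.merge(key, 1, Integer::sum);
    }

    public V get(K key) {
        if (values.containsKey(key)) {
            counts.merge(key, 1, Integer::sum); // each hit raises the count
        }
        return values.get(key);
    }
}
```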

4. Expiration Example

Passive deletion is shown above; active deletion can be scheduled with a background thread or scheduled executor.

5. Thread‑Safety Example

Use ConcurrentHashMap or synchronized wrappers as demonstrated.
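The synchronized-wrapper route can be sketched as a small decorator in the spirit of MyBatis's SynchronizedCache: a non-thread-safe HashMap made safe by synchronizing every public method. SynchronizedStore is an illustrative name, not a MyBatis class:

```java
import java.util.HashMap;
import java.util.Map;

// Synchronized decorator sketch: all access to the HashMap delegate goes
// through synchronized methods, serializing readers and writers.
public class SynchronizedStore<K, V> {
    private final Map<K, V> delegate = new HashMap<>();

    public synchronized void put(K key, V value) { delegate.put(key, value); }
    public synchronized V get(K key) { return delegate.get(key); }
    public synchronized V remove(K key) { return delegate.remove(key); }
    public synchronized int size() { return delegate.size(); }
}
```

ConcurrentHashMap generally scales better under contention, since it avoids a single lock for the whole map; the wrapper approach wins on simplicity when throughput demands are modest.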

6. Simple API Example

Expose the methods defined in the Cache interfaces.
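A minimal implementation of the MyBatis-style interface, modeled loosely on MyBatis's PerpetualCache (the interface is reproduced in the block so the sketch is self-contained; details may differ from the real class):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// The MyBatis-style Cache interface, repeated here for self-containment.
interface Cache {
    String getId();
    void putObject(Object key, Object value);
    Object getObject(Object key);
    Object removeObject(Object key);
    void clear();
    int getSize();
    ReadWriteLock getReadWriteLock();
}

// Simplest possible implementation: an id plus a HashMap delegate.
// Thread safety is left to a wrapping decorator, as in MyBatis.
public class PerpetualCache implements Cache {
    private final String id;
    private final Map<Object, Object> cache = new HashMap<>();
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    public PerpetualCache(String id) { this.id = id; }

    @Override public String getId() { return id; }
    @Override public void putObject(Object key, Object value) { cache.put(key, value); }
    @Override public Object getObject(Object key) { return cache.get(key); }
    @Override public Object removeObject(Object key) { return cache.remove(key); }
    @Override public void clear() { cache.clear(); }
    @Override public int getSize() { return cache.size(); }
    @Override public ReadWriteLock getReadWriteLock() { return lock; }
}
```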

7. Persistence Example

Configure the cache to write entries to disk if needed.
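A naive way to illustrate the idea is Java object serialization: write the backing map to a file on shutdown and read it back on startup. Real stores (Ehcache's disk tier, Redis's AOF/RDB) use far more robust formats; DiskPersistence below is a hypothetical helper for illustration only:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

// Naive persistence sketch: serialize the cache contents to a file and
// deserialize them back. Suitable only as an illustration.
public class DiskPersistence {
    public static void save(Map<String, String> cache, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new HashMap<>(cache)); // copy into a serializable map
        }
    }

    @SuppressWarnings("unchecked")
    public static Map<String, String> load(File file)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Map<String, String>) in.readObject();
        }
    }
}
```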

8. Blocking Example

The Memoizer pattern ensures that only one thread computes a missing value while others wait.
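On modern JDKs, ConcurrentHashMap.computeIfAbsent gives a compact shortcut to similar per-key blocking behavior: the loader runs at most once per key, and other threads requesting that key block until the value is installed. The demo class and squareCached method below are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// computeIfAbsent runs the mapping function atomically for a given key;
// concurrent callers for the same key wait rather than recompute.
public class ComputeIfAbsentDemo {
    private static final Map<Integer, Integer> CACHE = new ConcurrentHashMap<>();

    public static int squareCached(int n) {
        return CACHE.computeIfAbsent(n, k -> k * k); // loader runs once per key
    }
}
```

Unlike the Memoizer, the loader here holds an internal map lock while it runs, so it should be short and must not touch the same map; for long computations the Future-based approach remains preferable.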

Conclusion

The article enumerates the major factors to consider when designing a local cache: data structure, capacity limit, eviction policy, expiration handling, thread safety, blocking mechanism, a concise API, and optional persistence. Additional considerations may exist depending on specific use cases.

Written by Top Architect

Top Architect focuses on sharing practical architecture knowledge, covering enterprise, system, website, large-scale distributed, and high-availability architectures, plus architecture adjustments using internet technologies. We welcome idea-driven, sharing-oriented architects to exchange and learn together.