Master Java Concurrency: Locks, Singleton Patterns, ThreadLocal, Reflection and More
This article provides a comprehensive guide to Java concurrency and related concepts, covering synchronized lock upgrades, object vs class locks, lazy and double‑checked singleton implementations, ThreadLocal mechanics, reflection usage, annotation scopes, JVM class loading, and Redis cluster threading behavior.
The article begins with synchronized lock upgrades in Java, describing the upgrade sequence from no lock to biased lock, to lightweight lock, and finally to heavyweight lock, and explaining each stage:
No lock: the default state of a newly created object; whether biased locking applies afterwards is controlled by JVM flags (e.g., -XX:+UseBiasedLocking on JDKs that still support it).
Biased lock: the first thread to acquire the lock records its thread ID in the object header; subsequent acquisitions by the same thread need only a thread-ID comparison, with no atomic operation.
Lightweight lock: acquisition uses CAS to point the object header at a lock record on the thread's stack holding the displaced mark word; release also uses CAS.
Heavyweight lock: under real contention the lock inflates to an OS-level monitor; waiting threads block and are scheduled by the OS instead of spinning, which saves CPU.
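One way to observe these states is to print the object header; the following is a rough sketch, assuming the OpenJDK JOL library (org.openjdk.jol:jol-core) is on the classpath and a JDK on which biased locking is still available (it was disabled by default starting in JDK 15):

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // header in the unlocked (or biasable) state
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // header now reflects a biased or lightweight lock, depending on JVM flags
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}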
It then compares locking a regular object (e.g., this) with locking the .class object. An object-level lock only synchronizes threads that contend for the same instance, while a class-level lock synchronizes all threads across every instance of the class, because the Class object is unique per JVM. The two demos below illustrate the difference:
public class ObjectLockDemo {
    private int instanceCount = 0;

    // synchronized on "this": only callers sharing the same instance contend
    public synchronized void addInstance() {
        instanceCount++;
        System.out.println(Thread.currentThread().getName() + " -> " + instanceCount);
    }

    public static void main(String[] args) {
        ObjectLockDemo obj1 = new ObjectLockDemo();
        ObjectLockDemo obj2 = new ObjectLockDemo();
        new Thread(() -> { for (int i = 0; i < 3; i++) obj1.addInstance(); }, "ThreadA").start();
        new Thread(() -> { for (int i = 0; i < 3; i++) obj2.addInstance(); }, "ThreadB").start();
    }
}

Running this code shows that ThreadA and ThreadB can execute concurrently because they lock different objects; each instance maintains its own instanceCount. In contrast, locking on ClassLockDemo.class forces all threads to serialize:
public class ClassLockDemo {
    private static int staticCount = 0;

    // synchronized on the Class object: one lock shared by all instances in the JVM
    public void addStatic() {
        synchronized (ClassLockDemo.class) {
            staticCount++;
            System.out.println(Thread.currentThread().getName() + " -> " + staticCount);
        }
    }

    public static void main(String[] args) {
        ClassLockDemo obj1 = new ClassLockDemo();
        ClassLockDemo obj2 = new ClassLockDemo();
        new Thread(() -> { for (int i = 0; i < 3; i++) obj1.addStatic(); }, "ThreadC").start();
        new Thread(() -> { for (int i = 0; i < 3; i++) obj2.addStatic(); }, "ThreadD").start();
    }
}

The article then explains lazy-initialized singletons, highlighting that the naïve version is not thread-safe: two threads can pass the null check at the same time and each construct its own instance.
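A minimal sketch of that broken version (illustrative, not from the original):

public class UnsafeLazySingleton {
    private static UnsafeLazySingleton instance;
    private UnsafeLazySingleton() {}

    public static UnsafeLazySingleton getInstance() {
        if (instance == null) {                   // two threads may both see null here...
            instance = new UnsafeLazySingleton(); // ...and each create an instance
        }
        return instance;
    }
}

Making getInstance() a synchronized method fixes this but pays the locking cost on every call. The double-checked locking (DCL) pattern keeps the lock off the common path and uses volatile to prevent instruction reordering: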
public class LazySingleton {
    private static volatile LazySingleton instance;

    private LazySingleton() {}

    public static LazySingleton getInstance() {
        if (instance == null) {                    // first check: skip locking once initialized
            synchronized (LazySingleton.class) {
                if (instance == null) {            // second check: only one thread creates the instance
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }
}

Key points of DCL: (1) the first null check avoids taking the lock once the instance exists; (2) the synchronized block locks the unique Class object, so all callers contend on the same monitor; and (3) the second null check prevents a thread that waited on the lock from creating a duplicate instance. The volatile modifier matters because instance = new LazySingleton() compiles to three steps (allocate memory, run the constructor, publish the reference); without volatile these steps may be reordered, letting another thread observe a non-null but not-yet-constructed instance.
Next, the article covers ThreadLocal and its internal structure: each Thread holds a ThreadLocalMap whose keys are the ThreadLocal instances themselves and whose values are the thread-specific data. It lists the benefits (thread isolation, reduced coupling, performance) and warns about memory leaks: map entries reference the ThreadLocal key weakly but the value strongly, so in long-lived threads (such as thread-pool workers) a stale value can linger unless remove() is called.
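A short usage sketch (names are illustrative) showing the remove() discipline that avoids such leaks:

public class ThreadLocalDemo {
    // each thread sees its own copy, lazily initialized to "anonymous"
    private static final ThreadLocal<String> CURRENT_USER =
            ThreadLocal.withInitial(() -> "anonymous");

    public static void main(String[] args) {
        Runnable task = () -> {
            CURRENT_USER.set(Thread.currentThread().getName());
            try {
                System.out.println("user = " + CURRENT_USER.get());
            } finally {
                CURRENT_USER.remove(); // essential when the thread is reused, as in a pool
            }
        };
        new Thread(task, "alice").start();
        new Thread(task, "bob").start();
    }
}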
Reflection is then introduced as a runtime mechanism to obtain class metadata and manipulate objects without compile‑time knowledge. The steps are: obtain a Class object, retrieve fields/methods, and invoke them. An example shows using Class.forName to call setName on a User object.
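A self-contained sketch along those lines (the nested User class stands in for the article's example):

public class ReflectionDemo {
    // a tiny target class for the demo (illustrative; the article uses a User class)
    public static class User {
        private String name;
        public void setName(String name) { this.name = name; }
        @Override public String toString() { return "User{name=" + name + "}"; }
    }

    public static void main(String[] args) throws Exception {
        // 1. obtain the Class object from its fully qualified name
        Class<?> clazz = Class.forName("ReflectionDemo$User");
        // 2. instantiate via the no-arg constructor
        Object user = clazz.getDeclaredConstructor().newInstance();
        // 3. look up setName(String) and invoke it reflectively
        java.lang.reflect.Method setName = clazz.getMethod("setName", String.class);
        setName.invoke(user, "macrozheng");
        System.out.println(user); // User{name=macrozheng}
    }
}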
Annotation retention policies are explained: SOURCE (discarded after compilation), CLASS (present in bytecode but not at runtime), and RUNTIME (available via reflection). Typical use‑cases for each scope are provided.
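To make the RUNTIME scope concrete, here is a small sketch (annotation and class names are illustrative):

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class RetentionDemo {
    // RUNTIME retention keeps the annotation in the bytecode and visible to reflection
    @Retention(RetentionPolicy.RUNTIME)
    @interface Tag { String value(); }

    @Tag("example")
    static class Service {}

    public static void main(String[] args) {
        Tag tag = Service.class.getAnnotation(Tag.class);
        System.out.println(tag.value()); // prints "example"; with SOURCE or CLASS retention, getAnnotation would return null
    }
}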
The article also clarifies that Redis executes commands on a single-threaded event loop per node, so there are no multi-threading issues within a node (Redis 6+ adds optional I/O threads, but command execution remains single-threaded). In cluster mode, each key maps to one of 16384 hash slots (CRC16 of the key, mod 16384), and each slot is owned by a single primary node, so operations on a given key are processed sequentially by that node. Distributed consistency concerns (e.g., replication lag, split-brain) are a separate matter from multi-threading.
Finally, a brief algorithmic challenge is mentioned: given an unsorted array, return the maximum difference between successive elements in sorted order, in O(n) time and space, which hints at a linear-time bucket (pigeonhole) solution.
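The article gives no code for this, but the classic linear-time approach looks roughly like the following sketch: spread the values into evenly sized buckets so that, by the pigeonhole principle, the answer must span a bucket boundary rather than fall inside a single bucket.

public class MaxGap {
    // maximum difference between successive elements in sorted order, O(n) time and space
    public static int maximumGap(int[] nums) {
        int n = nums.length;
        if (n < 2) return 0;
        int min = Integer.MAX_VALUE, max = Integer.MIN_VALUE;
        for (int v : nums) { min = Math.min(min, v); max = Math.max(max, v); }
        if (min == max) return 0;
        // bucket width chosen so the answer cannot lie entirely within one bucket
        int width = Math.max(1, (max - min) / (n - 1));
        int buckets = (max - min) / width + 1;
        int[] bucketMin = new int[buckets], bucketMax = new int[buckets];
        java.util.Arrays.fill(bucketMin, Integer.MAX_VALUE);
        java.util.Arrays.fill(bucketMax, Integer.MIN_VALUE);
        for (int v : nums) {
            int idx = (v - min) / width;
            bucketMin[idx] = Math.min(bucketMin[idx], v);
            bucketMax[idx] = Math.max(bucketMax[idx], v);
        }
        // the answer is the largest jump from one non-empty bucket's max to the next's min
        int gap = 0, prevMax = min;
        for (int i = 0; i < buckets; i++) {
            if (bucketMin[i] == Integer.MAX_VALUE) continue; // skip empty buckets
            gap = Math.max(gap, bucketMin[i] - prevMax);
            prevMax = bucketMax[i];
        }
        return gap;
    }

    public static void main(String[] args) {
        System.out.println(maximumGap(new int[]{3, 6, 9, 1})); // prints 3 (sorted: 1, 3, 6, 9)
    }
}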
About the author: macrozheng, dedicated to Java tech sharing and dissecting top open-source projects. Topics include Spring Boot, Spring Cloud, Docker, Kubernetes, and more. His GitHub project "mall" has 50K+ stars.