Understanding Java synchronized: Locks, Optimizations, and Example Code
This article explains how Java's synchronized keyword works, covering object and class locks, lock upgrades such as biased, lightweight, and heavyweight locks, the underlying monitor implementation, example code for a thread‑safe counter, and performance considerations for high‑concurrency scenarios.
In Java multithreaded programming, the synchronized keyword is a fundamental tool for ensuring that only one thread can execute a protected code region at a time, providing thread safety for shared resources.
The article first describes the two basic lock types:
Object lock: applied when synchronized is used on an instance method or block, locking the specific object instance.
Class lock: applied when synchronized is used on a static method, locking the Class object itself.
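The class lock can be sketched with a static synchronized method; the class name Registry here is hypothetical, chosen only for illustration. A synchronized block on the class literal is the equivalent explicit form:

```java
public class Registry {
    private static int instances = 0;

    // Class lock: acquires the monitor of Registry.class,
    // not of any particular Registry instance
    public static synchronized void register() {
        instances++;
    }

    // Equivalent form: an explicit block on the Class object
    public static void registerExplicit() {
        synchronized (Registry.class) {
            instances++;
        }
    }

    public static synchronized int count() {
        return instances;
    }
}
```

Because both forms lock the same Class object, they exclude each other: a thread inside register() blocks another thread entering registerExplicit().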
// Object lock: synchronized instance method
public synchronized void method() {
    // thread-safe operation
}

// Object lock: synchronized block on the current instance
public void method2() {
    synchronized (this) {
        // thread-safe operation
    }
}

Next, the article explains lock upgrades and optimizations that the JVM performs to improve performance:
Biased lock: the lock is biased toward the first thread that acquires it, avoiding further locking overhead as long as that same thread continues to hold it.
Lightweight lock: when contention appears, the biased lock upgrades to a lightweight lock using CAS (Compare-And-Swap) operations, which are faster than OS-level locks.
Heavyweight lock: if contention persists, the lock escalates to a heavyweight (monitor) lock, involving OS thread scheduling and higher overhead.
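The escalation path above is driven by contention, which a small demo can provoke: two threads hammering the same object lock prevent it from staying biased, and under sustained contention the JVM typically inflates it to a heavyweight monitor. The exact transitions depend on the JVM version and flags, so this sketch only demonstrates the contention pattern, not the internal lock state:

```java
public class ContentionDemo {
    private long hits = 0;

    // Both threads call this concurrently, so the lock on `this`
    // cannot remain biased and will escalate under contention.
    public synchronized void hit() {
        hits++;
    }

    public synchronized long hits() {
        return hits;
    }

    public static long run(int perThread) {
        ContentionDemo demo = new ContentionDemo();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                demo.hit();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // synchronized guarantees no lost updates: always 2 * perThread
        return demo.hits();
    }
}
```

Whatever lock state the JVM chooses internally, the visible guarantee is the same: every increment is preserved.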
The underlying implementation relies on the object header and a Monitor object. Each Java object contains an object header that stores lock state; the JVM creates a Monitor associated with the object to manage lock acquisition and release.
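At the bytecode level, a synchronized block compiles to explicit monitorenter and monitorexit instructions that acquire and release that Monitor; this can be observed by disassembling a class like the following with javap -c (the class and field names here are illustrative):

```java
public class MonitorDemo {
    private final Object lock = new Object();
    private int value = 0;

    public int bump() {
        // javac compiles this block roughly to:
        //   monitorenter  -- acquire the monitor referenced via lock's object header
        //   ...increment the field...
        //   monitorexit   -- release; emitted on both normal and exceptional exit paths
        synchronized (lock) {
            return ++value;
        }
    }
}
```

Synchronized methods, by contrast, carry no explicit monitorenter/monitorexit in their bytecode; the JVM acquires the monitor implicitly based on the method's ACC_SYNCHRONIZED flag.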
To illustrate these concepts, a simple Counter class is presented:
public class Counter {
private int count = 0;
// lock the method
public synchronized void increment() {
count++;
}
public synchronized int getCount() {
return count;
}
}

Both increment and getCount are synchronized, meaning a calling thread must obtain the object's monitor before either method executes, ensuring that concurrent threads modify count safely.
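A quick way to exercise the Counter from many threads is an ExecutorService; this harness is an illustrative addition, not part of the article, and it reproduces the Counter class so the sketch compiles standalone:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// The Counter from the article, reproduced so this file is self-contained.
class Counter {
    private int count = 0;
    public synchronized void increment() { count++; }
    public synchronized int getCount() { return count; }
}

public class CounterDemo {
    public static int run(int threads, int perThread) {
        Counter counter = new Counter();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) {
                    counter.increment();
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        // With synchronized in place this is always threads * perThread;
        // with the synchronized keywords removed, updates would be lost.
        return counter.getCount();
    }
}
```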
Finally, the article discusses performance impacts: while synchronized provides strong safety guarantees, excessive contention causes thread blocking, and careless lock ordering across multiple locks can even lead to deadlocks. In high-concurrency scenarios, developers should consider more flexible mechanisms such as ReentrantLock, which supports timed and interruptible lock attempts, to reduce contention overhead.
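As a sketch of that alternative, here is the same counter built on java.util.concurrent.locks.ReentrantLock; unlike synchronized, the lock is not released automatically, so the try/finally pattern is mandatory:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock(); // always release, even if the body throws
        }
    }

    public int getCount() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```

Beyond this drop-in replacement, ReentrantLock also offers tryLock with a timeout and interruptible acquisition, which synchronized cannot express.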
IT Services Circle