Why Caching Timestamps Can Slash CPU Usage in High‑QPS Java and Go Services
This article explores how naive timestamp retrieval can become a CPU bottleneck under high concurrency, demonstrates cache‑based optimizations used in Alibaba's Cobar and Sentinel projects, presents benchmark results, and proposes an adaptive algorithm to enable or disable caching based on real‑time QPS.
Hello, I'm Xiao Lou. While browsing GitHub I found an Alibaba open‑source project that invites contributions to performance optimization, specifically timestamp caching.
Typical code for obtaining a timestamp in Java:

```java
long ts = System.currentTimeMillis();
```

And in Go:

```go
UnixTimeUnitOffset = uint64(time.Millisecond / time.Nanosecond)
ts := uint64(time.Now().UnixNano()) / UnixTimeUnitOffset
```

Although this works fine in most cases, a study (link) shows that under high concurrency the call becomes noticeably slower because all threads contend for a single global clock source.
Timestamp Caching
The Cobar database middleware solves this by caching timestamps:
Start a dedicated thread that updates the cached timestamp every 20 ms.
When a timestamp is needed, read the cached value.
Key implementation (TimeUtil.java):

```java
public class TimeUtil {
    private static long CURRENT_TIME = System.currentTimeMillis();

    public static final long currentTimeMillis() {
        return CURRENT_TIME;
    }

    public static final void update() {
        CURRENT_TIME = System.currentTimeMillis();
    }
}
```

And the scheduled update task (CobarServer.java):

```java
timer.schedule(updateTime(), 0L, TIME_UPDATE_PERIOD); // TIME_UPDATE_PERIOD = 20ms

private TimerTask updateTime() {
    return new TimerTask() {
        @Override
        public void run() {
            TimeUtil.update();
        }
    };
}
```

This works because Cobar's QPS is extremely high, and the cached timestamp only needs weak precision for internal statistics.
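To make the pattern concrete, here is a minimal, self-contained sketch of the same idea (the class and field names are my own, not Cobar's): a daemon thread refreshes a `volatile` field every 20 ms, so readers pay only a volatile read instead of a clock-source call, and see at most roughly 20 ms of staleness.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of Cobar-style timestamp caching; names are illustrative.
final class CachedClock {
    // volatile so reader threads promptly observe the updater's writes
    private static volatile long cachedMillis = System.currentTimeMillis();

    static {
        ScheduledExecutorService updater =
                Executors.newSingleThreadScheduledExecutor(r -> {
                    Thread t = new Thread(r, "cached-clock-updater");
                    t.setDaemon(true); // do not keep the JVM alive
                    return t;
                });
        // Refresh every 20 ms, matching Cobar's TIME_UPDATE_PERIOD
        updater.scheduleAtFixedRate(
                () -> cachedMillis = System.currentTimeMillis(),
                0, 20, TimeUnit.MILLISECONDS);
    }

    private CachedClock() {}

    // Cheap volatile read instead of querying the system clock
    static long currentTimeMillis() {
        return cachedMillis;
    }
}
```

Note the trade-off this encodes: every caller accepts up to one update period of error in exchange for eliminating contention on the clock source.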
Sentinel, Alibaba’s flow‑control library, adopts a similar cache to reduce its own overhead, but with a finer granularity (1 ms) because the timestamp directly influences rate‑limiting decisions.
Testing revealed that the cached-timestamp implementation in Sentinel can itself consume about 50% of CPU, a classic case of negative ROI. Benchmarks showed that the cache provides a net performance gain only when QPS exceeds roughly 4,000; below that, the overhead of the update loop outweighs the benefit, so Sentinel-Go disables the feature by default.
Adaptive Algorithm
Sentinel (Java ≥ 1.8.2) implements an adaptive strategy that switches between direct system calls and cached reads based on real‑time QPS. The cache loop runs every millisecond and maintains three states:
RUNNING: the cache is active and read QPS is being measured.
IDLE: the cache is inactive; the loop sleeps 300 ms between checks.
PREPARE: a transitional state that prepares the move from IDLE to RUNNING.
The state transition is driven by the check method, executed every 3 seconds. If the current state is IDLE and read QPS exceeds HITS_UPPER_BOUNDARY (1200), it moves to PREPARE, then to RUNNING. Conversely, if RUNNING and QPS falls below HITS_LOWER_BOUNDARY (800), it returns to IDLE. This smooth transition avoids timestamp drift when switching states.
During a read, if the state is RUNNING the cached value is returned; otherwise the system time is fetched directly. QPS statistics are collected via Sentinel’s LeapArray sliding window implementation.
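The transition and read logic described above can be sketched as follows. This is a simplified illustration, not Sentinel's actual source: the class and method names are my own, the 1 ms cache-update loop and the LeapArray-based QPS measurement are omitted, and only the thresholds (1200/800) come from the article.

```java
// Sketch of the adaptive switch between cached and direct timestamp reads.
class AdaptiveClock {
    enum State { IDLE, PREPARE, RUNNING }

    // Thresholds from the article: enter caching above 1200 QPS, leave below 800.
    static final long HITS_UPPER_BOUNDARY = 1200;
    static final long HITS_LOWER_BOUNDARY = 800;

    private volatile State state = State.IDLE;
    private volatile long cachedMillis = System.currentTimeMillis();

    // Driven periodically (every ~3 s in Sentinel) with the measured read QPS.
    State check(long readQps) {
        if (state == State.IDLE && readQps > HITS_UPPER_BOUNDARY) {
            state = State.PREPARE;   // warm up first to avoid timestamp drift
        } else if (state == State.PREPARE) {
            cachedMillis = System.currentTimeMillis(); // refresh before going live
            state = State.RUNNING;
        } else if (state == State.RUNNING && readQps < HITS_LOWER_BOUNDARY) {
            state = State.IDLE;      // load dropped; direct calls are cheaper now
        }
        return state;
    }

    // Read path: cached value only while RUNNING, otherwise a direct system call.
    long currentTimeMillis() {
        return state == State.RUNNING ? cachedMillis : System.currentTimeMillis();
    }
}
```

The intermediate PREPARE state is what makes the IDLE-to-RUNNING hop safe: the cache is refreshed before any reader is allowed to consume it, so callers never observe a value frozen during the idle period.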
Although the adaptive logic exists in Sentinel Java, Sentinel‑Go has not yet implemented it, presenting an opportunity for contributors.
Conclusion
Timestamp caching can dramatically reduce CPU usage in high‑QPS services, but only when the workload justifies it. An adaptive algorithm that enables caching only under sufficient load offers the best of both worlds, and implementing it in Sentinel‑Go would be a valuable contribution.
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
Xiao Lou's Tech Notes
Backend technology sharing, architecture design, performance optimization, source code reading, troubleshooting, and pitfall practices
