Mastering Oracle Wait Events: How Latch Contention Impacts Performance
This article explains how Oracle wait events, especially latch contention such as cache buffers chains and LRU chain latches, reveal performance bottlenecks, and provides step‑by‑step methods, diagnostic tools, SQL examples, and tuning recommendations to locate and resolve database hotspots.
Oracle wait events serve as a window for DBAs to diagnose performance problems, but not every wait indicates an issue; the challenge is to trace the root cause through the cascade of events.
Four Main Optimization Areas
Database performance can be improved by focusing on resource optimization, instance optimization, SQL optimization, and overall database architecture. SQL tuning is the most critical among them.
Typical Optimization Process
Define optimization goals and direction.
Collect database metrics.
Adjust configuration (beyond simple parameter tweaks).
Re‑collect metrics to verify improvements.
An analogy compares the process to a doctor diagnosing a patient: identifying symptoms, measuring vitals, diagnosing, prescribing treatment, and confirming recovery.
Key Diagnostic Tools
Older tools such as statspack, sql_trace, and events 10046/10053 provide raw statistics for manual analysis. Since Oracle 10g, the Automatic Database Diagnostic Monitor (ADDM) and SQL Tuning Advisor (STA) automate data collection in the Automatic Workload Repository (AWR) and generate tuning suggestions, dramatically reducing analysis time.
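The AWR/ADDM workflow described above can be sketched in SQL. This is a minimal sketch, assuming an 11g+ database; the snapshot IDs (100, 101) and the task name addm_demo are illustrative placeholders you would replace with your own values:

```sql
-- Take a manual AWR snapshot before and after the workload of interest.
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- List recent snapshots to pick a begin/end pair.
SELECT snap_id, begin_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id DESC;

-- Run ADDM over the chosen interval (snapshot IDs are placeholders).
DECLARE
  l_task VARCHAR2(30) := 'addm_demo';
BEGIN
  DBMS_ADDM.ANALYZE_DB(l_task, 100, 101);
END;
/

-- Read the findings and recommendations.
SELECT DBMS_ADDM.GET_REPORT('addm_demo') FROM dual;
```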
Latch: cache buffers chains
Oracle protects the hash chains of buffer headers in the buffer cache with cache buffers chains latches. When multiple processes scan the buffer cache simultaneously, they contend for these latches, producing the latch: cache buffers chains wait event.
Example SQL to view the hidden parameters that size the hash buckets and their latches:
SELECT x.ksppinm name,
y.ksppstvl value,
y.ksppstdf isdefault,
DECODE(BITAND(y.ksppstvf,7),1,'MODIFIED',4,'SYSTEM_MOD') ismod,
DECODE(BITAND(y.ksppstvf,2),2,'TRUE','FALSE') isadj
FROM sys.x$ksppi x, sys.x$ksppcv y
WHERE x.inst_id = USERENV('Instance')
AND y.inst_id = USERENV('Instance')
AND x.indx = y.indx
AND x.ksppinm LIKE '%db_block_hash%'
ORDER BY TRANSLATE(x.ksppinm,' _','');
Typical causes of contention:
Inefficient SQL that scans large tables or indexes, causing many processes to request the latch.
Hot blocks accessed repeatedly by many sessions.
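One common way to hunt for the causes above is to rank the child latches by sleeps and then map a contended latch address back to the hot blocks hanging off its chain. A sketch, assuming SYS access (x$bh is a fixed table) and a SQL*Plus-style &latch_addr substitution variable:

```sql
-- Child latches of cache buffers chains with the most sleeps.
SELECT *
  FROM (SELECT addr, gets, misses, sleeps
          FROM v$latch_children
         WHERE name = 'cache buffers chains'
         ORDER BY sleeps DESC)
 WHERE ROWNUM <= 10;

-- Hot blocks protected by one contended child latch
-- (substitute an ADDR value from the query above).
SELECT o.object_name, bh.dbablk, bh.tch
  FROM x$bh bh, dba_objects o
 WHERE bh.obj = o.data_object_id
   AND bh.hladdr = '&latch_addr'
 ORDER BY bh.tch DESC;
```

The tch (touch count) column is a rough popularity measure: blocks with the highest touch counts on a hot chain are the usual suspects.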
Case Study 1 – Reproducing Cache Buffers Chains Contention
Steps:
Create a test table and insert a row.
Retrieve the row’s ROWID and derive its file and block numbers using DBMS_ROWID.
Query x$bh to get the latch address (HLADDR) for that block.
Check the latch’s GETS count via v$latch_children.
Perform a SELECT on the row again; the latch GETS count should increase by two (one get to pin the buffer and one to release the pin).
Use oradebug setmypid and oradebug poke to manually set and clear the latch, observing the effect on concurrent sessions.
Open a second session (SID 768), repeat the SELECT, and note that no blocking occurs until an UPDATE is issued, which forces exclusive latch acquisition.
Observe the wait event latch: cache buffers chains in v$session for the blocking session.
Release the latch with oradebug poke … 0.
Result: A logical read on a block incurs two latch operations; an UPDATE causes exclusive latch mode, leading to observable contention.
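The first steps of the case study can be sketched as follows. Table and substitution-variable names (t_latch, &file_no, &block_no, &hladdr) are illustrative, and steps 3–4 assume SYS access:

```sql
-- 1. Create a test table and insert a row.
CREATE TABLE t_latch (id NUMBER);
INSERT INTO t_latch VALUES (1);
COMMIT;

-- 2. Derive the file and block number from the row's ROWID.
SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file_no,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block_no
  FROM t_latch;

-- 3. Find the protecting child latch address (HLADDR) for that buffer.
SELECT hladdr
  FROM x$bh
 WHERE file# = &file_no
   AND dbablk = &block_no;

-- 4. Check the latch's gets counter before and after another SELECT;
--    each logical read of the block should add two gets.
SELECT gets
  FROM v$latch_children
 WHERE addr = '&hladdr';
```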
Latch: cache buffers LRU chain
Background: When a process needs a buffer that is not yet in memory, it must allocate a free buffer from the LRU list, which requires the cache buffers lru chain latch. The DBWR process also takes this latch when it scans the LRU list and moves dirty buffers to the write list.
Contention arises from excessive requests for free buffers, typically caused by inefficient SQL that scans large data sets or by heavy write activity.
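A quick check for this kind of contention is a sketch along these lines, reading the system-wide latch counters and any sessions currently stuck on the event:

```sql
-- System-wide contention on the LRU chain latches; a rising miss
-- percentage suggests excessive free-buffer requests.
SELECT name, gets, misses, sleeps,
       ROUND(misses / NULLIF(gets, 0) * 100, 4) AS miss_pct
  FROM v$latch
 WHERE name = 'cache buffers lru chain';

-- Sessions currently waiting on the latch.
SELECT sid, event, p1 AS latch_addr, seconds_in_wait
  FROM v$session_wait
 WHERE event = 'latch: cache buffers lru chain';
```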
Shared Pool and Library Cache
The shared pool stores parsed SQL, packages, and other objects. Hard parses allocate new memory and acquire the shared pool latch, while soft parses reuse existing cursors. High version counts for the same SQL increase latch traffic.
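The hard-parse share can be read directly from the instance statistics; a sketch:

```sql
-- Hard vs. total parses since instance startup; a high hard-parse
-- share usually points to literal SQL flooding the shared pool.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)');
```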
Best practices to reduce latch pressure:
Prefer bind variables over literals for OLTP workloads.
Avoid DDL during peak hours.
Adjust cursor_sharing (use FORCE cautiously).
Increase session_cached_cursors to keep frequently used cursors in session cache.
Set _sqlexec_progression_cost to 0 to prevent high version counts (a hidden parameter; change it only under Oracle Support guidance).
Pin frequently used objects with DBMS_SHARED_POOL.KEEP.
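Two of the recommendations above can be sketched as follows; the package name SCOTT.PKG_ORDERS and the cursor-cache value of 100 are illustrative placeholders, not recommendations for any specific system:

```sql
-- Pin a hot package so it is never aged out of the shared pool
-- ('P' flags a package/procedure/function).
EXEC DBMS_SHARED_POOL.KEEP('SCOTT.PKG_ORDERS', 'P');

-- Cache more closed cursors per session; session_cached_cursors is
-- static at the system level, so this takes effect after a restart.
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;
```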
Queries to identify problematic SQL:
SELECT SUBSTR(sql_text,1,40) "SQL", invalidations
FROM v$sqlarea
ORDER BY invalidations DESC;

SELECT address, hash_value, version_count, users_opening, users_executing,
SUBSTR(sql_text,1,40) "SQL"
FROM v$sqlarea
WHERE version_count > 10;
Real‑World Case: Network‑Related Waits and Data Guard
An AWR report from a lightly loaded system showed high wait times for events such as log file sync, SQL*Net message from/to client, and various redo transport waits. Investigation revealed that the primary database’s network link to its Data Guard standby was saturated, causing commit latency and overall slowdown.
Key takeaways:
Even idle systems can suffer from network‑related waits that dominate DB time.
Analyzing foreground and background wait classes helps pinpoint whether the issue is client‑side, network, or storage.
Properly sizing and monitoring the network between primary and standby is essential for DG environments.
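To see whether network- or commit-class waits dominate, the cumulative wait statistics can be read with a sketch like this (10g+, where v$system_event exposes wait_class and microsecond timings):

```sql
-- Network- and commit-class waits since startup; compare their share
-- of total wait time to spot a saturated redo transport link.
SELECT event, total_waits,
       ROUND(time_waited_micro / 1e6) AS seconds_waited
  FROM v$system_event
 WHERE wait_class IN ('Network', 'Commit')
 ORDER BY time_waited_micro DESC;
```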
Overall, understanding Oracle wait events—especially latch contention—and applying systematic diagnostics (ADDM, STA, AWR) enable DBAs to locate hotspots, reduce unnecessary parsing, and improve both OLTP and OLAP performance.
dbaplus Community