A DBA’s 3‑Night Deep Dive: Comprehensive Oracle I/O Performance Diagnosis and Hands‑On Tactics
The article walks through Oracle response‑time analysis, compares service time and wait time, presents two real‑world cases (Statspack before 10g and AWR after 10g) in which I/O waits contribute only a small fraction of total response time (under 6 % and under 20 %, respectively), and then details step‑by‑step methods, parameters, and storage techniques for diagnosing and reducing I/O‑related performance problems.
Identifying I/O‑related waits
AWR or Statspack reports I/O wait events in the “Top 5 Wait/Timed Events” section. A SQL trace that captures session wait events shows whether the workload is limited by I/O. OS tools show high utilization or saturation of the disks that hold database files.
Response‑time analysis
Response Time = Service Time + Wait Time
Service Time is measured by the “CPU used by this session” statistic. Wait Time is the sum of time spent in wait events.
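A minimal sketch of how these two components can be read directly from the dynamic performance views; note that the idle‑event filter below is abbreviated (a real report excludes the full list of idle events):

-- Service Time in centiseconds (pre-10g statistic)
SELECT value AS cpu_used_cs
  FROM v$sysstat
 WHERE name = 'CPU used by this session';

-- Wait Time: total time waited across non-idle events (centiseconds)
SELECT SUM(time_waited) AS wait_time_cs
  FROM v$system_event
 WHERE event NOT IN ('SQL*Net message from client',
                     'rdbms ipc message',
                     'pmon timer', 'smon timer');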
Using AWR/Statspack for tuning
The method evaluates the relative impact of each component of overall response time and directs tuning effort toward the component that consumes the most time. Since Oracle 10g the process is automated by the Automatic Database Diagnostic Monitor (ADDM).
References: Document 190124.1 THE COE PERFORMANCE METHOD; Document 250655.1 How to use the Automatic Database Diagnostic Monitor
Assessing the impact of I/O waits
Compare the total time spent in wait events with the session’s CPU time; if Service Time dominates, investigating the waits will not yield a significant response‑time reduction.
Before Oracle 9i Release 2 the list was called “Top 5 Wait Events”; later it became “Top 5 Timed Events” and the CPU column was renamed “CPU Time”.
Case 1 – Statspack before Oracle 9i Release 2
Top 5 Wait Events
Event                         Waits  Time (cs)      %
-----------------------------------------------------
direct path read              4,232     10,827  52.01
db file scattered read        6,105      6,264  30.09
direct path write             1,992      3,268  15.70
control file parallel write     893        198   0.95
db file parallel write           40        131   0.63
-----------------------------------------------------

Service‑Time statistic: CPU used by this session = 358,806 cs

Calculations:
Wait Time = 10,827 × 100 % / 52.01 % = 20,817 cs
Service Time = 358,806 cs
Response Time = 358,806 + 20,817 = 379,623 cs

Breakdown:
CPU time = 94.52 %
Direct path read = 2.85 %
Db file scattered read = 1.65 %
Direct path write = 0.86 %
Control file parallel write = 0.05 %
Db file parallel write = 0.03 %

I/O‑related waits account for less than 6 % of total response time; tuning should focus on CPU‑intensive operations.
Case 2 – AWR after Oracle 10g
Top 5 Timed Foreground Events
Event                       Waits  Time(s)  % DB  Wait Class
------------------------------------------------------------
DB CPU                              33,615  82.0
db file sequential read 3,101,013    7,359  18.0  User I/O
log file sync             472,958      484   1.2  Commit
read by other session      46,134      291   0.7  User I/O
db file parallel read      91,982      257   0.6  User I/O
------------------------------------------------------------

Wait events contribute less than 20 % of total time, confirming that CPU is the dominant factor.
General remediation categories when I/O waits dominate
Reduce overall I/O generation – tune SQL that generates large I/O, size the SGA buffer cache appropriately, consider partitioning, use materialized views or result cache.
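For the buffer‑cache sizing part, v$db_cache_advice predicts how physical reads would change at other cache sizes; a sketch, assuming the default cache and an 8 KB block size:

SELECT size_for_estimate AS cache_mb,  -- simulated cache size in MB
       size_factor,                    -- relative to the current size
       estd_physical_reads             -- predicted physical reads at that size
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
   AND block_size = 8192
 ORDER BY size_for_estimate;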
Control I/O at the SQL level – identify the top SQL statements that cause full table scans or fast full index scans and rewrite them, rebuild fragmented indexes, and reduce the clustering factor. Example query:
SELECT t.sql_text
  FROM v$sqltext t, v$sql_plan p
 WHERE t.hash_value = p.hash_value
   AND p.operation = 'TABLE ACCESS'
   AND p.options = 'FULL'
 ORDER BY p.hash_value, t.piece;

From Oracle 10g onward, ADDM automatically identifies the most expensive SQL; the SQL Tuning Advisor can then be used to adjust execution plans (Document 262687.1).
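Invoking the SQL Tuning Advisor through the DBMS_SQLTUNE package can be sketched as follows; the sql_id shown is a placeholder for a statement identified by ADDM:

DECLARE
  l_task VARCHAR2(128);
BEGIN
  -- Create and run a tuning task for one expensive statement
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id     => 'abcd1234efgh5',  -- placeholder sql_id
              time_limit => 60);
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/
-- Review the recommendations (substitute the task name returned above)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_123') FROM dual;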
Instance‑parameter tuning – increase buffer cache, keep hot segments in the KEEP pool, adjust db_file_multiblock_read_count (auto‑tuned from 10g R2 onward), enable asynchronous I/O, enable Direct I/O for large bulk operations (Document 1398860.1).
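A sketch of the corresponding commands (hot_table is a hypothetical segment; SETALL enables both asynchronous and direct I/O on most platforms):

-- Pin a hot segment in the KEEP pool (requires a sized keep cache)
ALTER SYSTEM SET db_keep_cache_size = 256M SCOPE = BOTH;
ALTER TABLE hot_table STORAGE (BUFFER_POOL KEEP);

-- Enable asynchronous and direct I/O for file-system datafiles
-- (static parameter: takes effect after an instance restart)
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;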
Storage‑level techniques – use Oracle ASM for automatic striping and load balancing (Document 1187723.1), consider RAID/SAN/NAS configurations, manually relocate heavily‑used data files to less‑saturated disks.
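Before relocating anything manually, the busiest data files can be identified by joining v$filestat to dba_data_files; a sketch:

SELECT f.file_name,
       s.phyrds,   -- physical reads against the file
       s.phywrts   -- physical writes against the file
  FROM v$filestat s
  JOIN dba_data_files f ON f.file_id = s.file#
 ORDER BY s.phyrds + s.phywrts DESC;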
Specific wait‑event tuning – consult Oracle reference notes for each event:
db file sequential read – Document 1475825.1
db file scattered read – Document 1476092.1
direct path read / write – Document 1476089.1, Document 1475655.1
log file sync / parallel write – Document 34592.1, Document 34583.1
Key wait‑event details
db file sequential read
Most common I/O wait; typically a single‑block read of index or table data. Troubleshooting guidance is in Document 1475825.1 and Document 1477209.1.
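To see which segments drive these single‑block reads, v$segment_statistics (available from 9i Release 2) can be queried; a sketch using the 'physical reads' statistic:

SELECT owner, object_name, object_type, value AS physical_reads
  FROM (SELECT owner, object_name, object_type, value
          FROM v$segment_statistics
         WHERE statistic_name = 'physical reads'
         ORDER BY value DESC)
 WHERE ROWNUM <= 10;  -- top 10 segments by physical reads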
db file scattered read
Occurs during multi‑block reads of non‑contiguous blocks (full table scans, fast full index scans). See Document 1476092.1 and Document 1475785.1.
direct path read / write
Used for bulk operations that bypass the buffer cache. Reference notes: Document 1476089.1 (read) and Document 1475655.1 (write).
log file sync and log file parallel write
log file sync is waited on by foreground processes during COMMIT/ROLLBACK; log file parallel write is waited on by LGWR when flushing redo. Guidance includes reducing redo generation, placing redo groups on fast disks, avoiding RAID‑5 for redo, and using NOLOGGING where appropriate (Document 34592.1, Document 34583.1).
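Comparing the average latency of the two events shows whether LGWR I/O or something else (e.g., commit frequency, CPU starvation) dominates log file sync; a sketch:

SELECT event,
       total_waits,
       ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 2) AS avg_ms
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');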
Additional considerations
Always verify that the operating‑system and hardware layers are healthy; disk errors reported by the OS must be resolved before database‑level tuning can be effective.
For unresolved issues, open a thread in the Oracle MOS Database Tuning Community.
https://community.oracle.com/mosc/categories/database_tuning
This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact us and we will review it promptly.
ITPUB
Official ITPUB account sharing technical insights, community news, and exciting events.
