Why Oracle AWR Is the Gold Standard for DB Performance and How Domestic Databases Compare
The article explains Oracle's Automatic Workload Repository (AWR) as a comprehensive performance-diagnostic tool, breaks down its core functions, and then evaluates how several Chinese domestic databases, with Kingbase as the primary example, measure up in terms of report completeness, metric richness, SQL analysis, wait-event handling, OS integration, and usability.
Oracle Automatic Workload Repository (AWR)
Oracle AWR is the core component of Oracle’s performance‑diagnosis and optimization framework. It automatically creates snapshots of the entire database instance at regular intervals (default 1 hour) and stores them in the SYSAUX tablespace. Snapshots can also be created on demand with DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT.
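The snapshot and reporting workflow can be driven entirely from SQL*Plus. The sketch below takes an on-demand snapshot, finds the snapshot IDs, and renders an HTML report; the DBID and snapshot IDs shown are placeholders to substitute with values from your own instance.

```sql
-- Take an on-demand AWR snapshot (requires EXECUTE on DBMS_WORKLOAD_REPOSITORY)
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

-- Find recent snapshot IDs to use as report boundaries
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC
FETCH FIRST 5 ROWS ONLY;

-- Render an HTML report between two snapshots
-- (arguments: dbid, instance number, begin snap, end snap -- all placeholders)
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(1234567890, 1, 100, 101));
```

In practice most DBAs run the interactive wrapper script `@?/rdbms/admin/awrrpt.sql`, which prompts for the same values and spools the report to a file.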
Core capabilities built on snapshots
Full‑stack data collection – captures macro‑level time‑model statistics (DB Time, DB CPU), detailed wait‑event counters, SQL execution metrics (elapsed time, CPU, logical/physical reads, executions, rows processed) and key operating‑system counters (CPU utilisation, memory pressure, disk I/O, network throughput).
Comparative analysis and trend assessment – a user can select two snapshots and generate a comparison report that shows absolute values, deltas and growth rates for every metric, enabling rapid identification of performance regressions.
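A period-over-period comparison can be generated the same way. This sketch compares two snapshot intervals on the same database; the DBID, instance number, and snapshot IDs are placeholders.

```sql
-- Compare two periods (e.g., this week's peak hour vs. last week's)
-- Arguments: dbid, inst, begin/end snap for period 1, then the same for period 2
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_HTML(
               1234567890, 1, 100, 101,    -- baseline period
               1234567890, 1, 268, 269));  -- comparison period
```

The resulting report shows each metric side by side with its delta, which is usually the fastest way to confirm a regression introduced by a release.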
Automated diagnosis – the built‑in Automatic Database Diagnostic Monitor (ADDM) analyses the data between snapshots, identifies root causes and issues actionable recommendations (e.g., create an index for a specific SQL_ID). The Active Session History (ASH) samples active sessions every second, providing second‑level visibility into short‑duration spikes.
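Because ASH samples each active session once per second, each sample approximates one second of active time, so counting samples gives a quick activity profile. A minimal sketch against the in-memory view:

```sql
-- Which SQL and wait events dominated the last 5 minutes?
-- Each ASH sample ~= 1 second of active session time; NULL event means on CPU
SELECT sql_id,
       NVL(event, 'ON CPU') AS event,
       COUNT(*)             AS samples
FROM   v$active_session_history
WHERE  sample_time > SYSTIMESTAMP - INTERVAL '5' MINUTE
GROUP  BY sql_id, NVL(event, 'ON CPU')
ORDER  BY samples DESC
FETCH FIRST 10 ROWS ONLY;
```

For spikes older than the in-memory retention window, the same query pattern works against `dba_hist_active_sess_history`, which keeps one in ten ASH samples on disk.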
Operational impact
AWR turns database administration from reactive “fire‑fighting” into a proactive, data‑driven health‑management loop: monitor → analyse → optimise → verify. It lowers the expertise barrier by standardising reports, preserves organisational knowledge in repeatable diagnostics, supports capacity planning through long‑term trend data, and provides a common performance‑communication language for developers, operations and business stakeholders.
Domestic databases – Kingbase KWR as an AWR analogue
Kingbase implements a snapshot mechanism called KWR. A background process (kwr_collector) automatically creates snapshots at a configurable interval (default 1 hour), and DBAs can also invoke a manual snapshot via SQL. Each snapshot preserves the cumulative values of the performance views at a point in time; deltas are then computed between any two snapshots, retaining historical statistics that would otherwise be overwritten.
The snapshots feed downstream tools – the KWR report generator, KDDM and KWR_DIFF – which produce reports and support time‑model analysis similar to Oracle’s AWR.
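As a rough sketch of the workflow, the statements below mirror the AWR pattern. Note that the schema and function names here (`perf.create_snapshot`, `perf.snapshots`) are assumptions for illustration only — the exact names vary across KingbaseES versions, so consult the KWR documentation for your release.

```sql
-- Hypothetical: take a manual KWR snapshot (function name is an assumption)
SELECT * FROM perf.create_snapshot();

-- Hypothetical: list stored snapshots to pick report boundaries
SELECT snap_id, begin_time, end_time
FROM   perf.snapshots
ORDER  BY snap_id DESC;
```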
Comparison dimensions
Report structure completeness – Oracle AWR includes load summary, time model, wait events, SQL statistics, memory cache, OS statistics and historical comparisons, all tightly linked. Many domestic products omit modules (e.g., detailed time‑model breakdown), provide weak data linkage, and lack OS integration.
Metric richness – Oracle captures a wide set of metrics and correlates them (e.g., DB Time broken into parse, execute, and CPU time; wait‑event histograms; OS CPU, memory, I/O). Domestic equivalents often miss deep metrics such as histogram distributions and cross‑layer correlations.
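The time-model breakdown mentioned above is queryable directly from the workload repository. This sketch computes the per-component deltas between two snapshots; the snapshot IDs are placeholders, and `DBA_HIST_SYS_TIME_MODEL` stores values in microseconds.

```sql
-- How DB Time between snapshots 100 and 101 decomposes (placeholder snap IDs)
SELECT e.stat_name,
       ROUND((e.value - b.value) / 1e6, 1) AS seconds
FROM   dba_hist_sys_time_model b
JOIN   dba_hist_sys_time_model e
  ON   e.stat_name        = b.stat_name
 AND   e.instance_number  = b.instance_number
WHERE  b.snap_id = 100
  AND  e.snap_id = 101
  AND  e.stat_name IN ('DB time', 'DB CPU',
                       'parse time elapsed', 'sql execute elapsed time')
ORDER  BY seconds DESC;
```

Comparing 'DB CPU' against 'DB time' immediately tells you whether the instance was CPU-bound or wait-bound over the interval.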
SQL analysis depth – Oracle ranks top SQL by multiple dimensions (elapsed time, CPU, logical/physical reads, executions, rows processed), links each SQL to its execution plan, bind variables and associated wait events. Most domestic databases expose only a single ranking dimension and do not capture execution plans or bind values.
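The multi-dimensional ranking comes straight out of `DBA_HIST_SQLSTAT`, whose `*_delta` columns already hold per-interval values. A sketch ranking by elapsed time over a placeholder snapshot range:

```sql
-- Top SQL by elapsed time across a snapshot range (IDs are placeholders)
SELECT sql_id,
       SUM(elapsed_time_delta) / 1e6 AS elapsed_s,
       SUM(cpu_time_delta)     / 1e6 AS cpu_s,
       SUM(buffer_gets_delta)        AS logical_reads,
       SUM(disk_reads_delta)         AS physical_reads,
       SUM(executions_delta)         AS executions
FROM   dba_hist_sqlstat
WHERE  snap_id BETWEEN 101 AND 105
GROUP  BY sql_id
ORDER  BY elapsed_s DESC
FETCH FIRST 10 ROWS ONLY;
```

From any `sql_id` in the result, `DBMS_XPLAN.DISPLAY_AWR` retrieves the execution plans captured for that statement — the linkage that most domestic equivalents lack.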
Wait‑event analysis – Oracle provides total/average times, counts, histograms and ties each event to the offending SQL and OS resources. Domestic systems usually lack histograms, have coarse categorisation, and do not link events to SQL or OS metrics.
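The histogram data Oracle exposes per event makes skew visible that averages hide. A minimal sketch against the in-memory view (the cumulative AWR version lives in `dba_hist_event_histogram`):

```sql
-- Latency distribution for single-block reads since instance startup:
-- wait_time_milli is the upper bound of each bucket (1, 2, 4, 8 ms, ...)
SELECT wait_time_milli, wait_count
FROM   v$event_histogram
WHERE  event = 'db file sequential read'
ORDER  BY wait_time_milli;
```

A healthy average can mask a long tail; if most waits land in the 1–2 ms buckets but a visible fraction falls above 64 ms, the storage tier is intermittently stalling even though the mean looks fine.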
Operating‑system integration – Oracle integrates detailed OS metrics (CPU breakdown, memory pressure, disk I/O throughput/latency, network bandwidth) with database metrics, enabling end‑to‑end performance views. Many domestic products collect only basic OS data, if any, and do not correlate it with DB metrics.
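The OS counters are stored in the same repository and keyed by the same snapshot IDs, which is what makes cross-layer correlation possible. A sketch pulling a few of them for one placeholder snapshot:

```sql
-- OS counters captured alongside DB metrics for the same snapshot (ID is a placeholder)
SELECT stat_name, value
FROM   dba_hist_osstat
WHERE  snap_id = 101
  AND  stat_name IN ('BUSY_TIME', 'IDLE_TIME', 'LOAD', 'PHYSICAL_MEMORY_BYTES');
```

Because `dba_hist_osstat` and the DB-side views share `snap_id`, a single join answers questions like "was host CPU saturated during the interval when DB CPU spiked?".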
Readability and interactivity – Oracle AWR reports are HTML‑based, colour‑coded, and support drill‑down links from SQL IDs to execution plans, as well as dynamic filtering/sorting. Domestic reports are often static, lack visual design and interactive drill‑downs, making them harder to use without deep DBA expertise.
Typical improvement roadmap for domestic implementations
Enrich core modules – add a complete time‑model breakdown, detailed wait‑event statistics and OS metric collection.
Strengthen metric correlation – automatically link SQL, wait events and OS counters to form an end‑to‑end evidence chain.
Introduce histogram statistics for wait events and I/O metrics to expose distribution patterns.
Provide execution‑plan capture and bind‑variable exposure for each top SQL.
Implement interactive HTML reports with drill‑down links, colour‑coded highlights and dynamic filtering.
Consider adding machine‑learning‑based anomaly detection to suggest proactive optimisation actions.
These steps would bring domestic databases closer to the diagnostic depth and usability of Oracle AWR.
dbaplus Community
Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.