
How OGG Enables Multi‑Source Oracle Data Distribution in a Large‑Scale Environment

This article details a real‑world implementation of Oracle GoldenGate for multi‑source data distribution, covering the architectural design, OGG fundamentals, operational challenges, troubleshooting steps, and future upgrade directions in a production environment.


Background and Requirement

A provincial telecom operator needed to replace a legacy mainframe‑based data replication method with Oracle GoldenGate (OGG) to distribute data from many Oracle 11.2.0.4 source databases to regional query nodes. Direct network connectivity between sources and targets was unavailable and local storage for queue files was insufficient.

Architecture Overview

A high‑capacity distribution center centralizes network paths and stores OGG trail files. Sources write trail files, which are transferred via TCP/IP to the distribution center, retained for extended periods, and then delivered to target databases for replay.
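A minimal parameter-file sketch of this hub-and-spoke topology is shown below. All process names, trail prefixes, hostnames, and schema names are illustrative assumptions, not taken from the deployment described here; passwords are elided.

```
-- Source: primary Extract capturing changes into a local trail
EXTRACT ext1
USERID ogg, PASSWORD ******
EXTTRAIL ./dirdat/aa
TABLE BILLING.*;

-- Source: Data Pump shipping the local trail to the distribution center
EXTRACT pump1
RMTHOST hub.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/bb
TABLE BILLING.*;

-- Distribution center: second pump relaying the retained trail to a regional target
EXTRACT pump2
RMTHOST target.example.com, MGRPORT 7809
RMTTRAIL ./dirdat/cc
TABLE BILLING.*;

-- Target: Replicat replaying the trail into the query database
REPLICAT rep1
USERID ogg, PASSWORD ******
MAP BILLING.*, TARGET BILLING.*;
```

The hub's only job in this sketch is to store trails and relay them, so it needs no Oracle database of its own, only an OGG Manager and sufficient disk.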

OGG Technical Principles

OGG captures changes from online redo logs or archived logs, converts them into a proprietary trail-file format, and moves them through a pipeline of processes: an Extract (capture) process on the source, an optional Data Pump that ships trails over TCP/IP, and a Replicat (delivery) process that applies them on the target. Checkpoints recorded by each process enable resume after interruptions, providing near-real-time (sub-second) replication with break-point-resume capability.

Operational Experience

Supported data types are limited to common OLTP types; this covers most production tables.

Source I/O impact is minimal, but memory spikes up to 64 GB were observed; adding MEMORYLIMIT parameters mitigated the issue.
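One way to bound Extract's transaction-cache memory is the CACHEMGR parameter in the Extract parameter file. The sketch below is an assumption about how such a cap might be set; the 8 GB figure and spill directory are illustrative, not values from this deployment:

```
-- Extract parameter file: cap the transaction cache and
-- spill oversized transactions to disk instead of RAM
CACHEMGR CACHESIZE 8GB, CACHEDIRECTORY ./dirtmp
```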

Bounded Recovery (BR) must be disabled for Extract processes reading from standby servers to avoid process hangs.
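Assuming "BR" here refers to OGG's Bounded Recovery feature, disabling it is a one-line change in the affected Extract's parameter file:

```
-- Extract parameter file: turn off Bounded Recovery
BR BROFF
```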

OGG version on the source must be equal to or lower than the distribution‑center version; mismatches cause delivery failures.

DDL replication is possible: classic mode requires full‑database triggers, while Integrated Mode handles DDL without triggers.
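With integrated Extract, enabling trigger-free DDL capture is a matter of parameter-file directives. A hedged sketch (scope and options are illustrative choices, not the site's actual settings):

```
-- Integrated Extract: replicate DDL for mapped tables, no triggers needed
DDL INCLUDE MAPPED
DDLOPTIONS REPORT
```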

Problem Solving

Enforced uniform OGG versions across source, hub, and target.

Compressed tables caused minor issues; otherwise table‑type support was adequate.

Disabled BR on standby servers to prevent hangs.

Moved to Integrated Mode for trigger‑free DDL replication.

Added memory‑limit parameters to control OGG memory consumption.

Implemented custom backup scripts to retain archived logs until OGG processed them, preventing log loss.
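The core of such a retention script is a guard that refuses to delete any archived log whose sequence OGG has not yet read past. The sketch below shows only that guard; the script name, the comparison by sequence number, and the idea of parsing `INFO EXTRACT ..., SHOWCH` output to obtain OGG's read position are assumptions about one plausible implementation, not the operator's actual script.

```shell
#!/bin/sh
# Retention guard: an archived log is safe to delete only if its
# sequence is strictly below the sequence OGG Extract is reading.
# In a real script, ogg_read_seq would be obtained by parsing
# "ggsci" output (e.g. INFO EXTRACT ext1, SHOWCH) -- hypothetical here.

safe_to_delete() {
  log_seq="$1"        # sequence number of the candidate archived log
  ogg_read_seq="$2"   # sequence number OGG is currently reading
  [ "$log_seq" -lt "$ogg_read_seq" ]
}

if safe_to_delete 100 105; then echo "delete"; else echo "keep"; fi
if safe_to_delete 105 105; then echo "delete"; else echo "keep"; fi
```

Running the two checks prints `delete` for sequence 100 (already consumed) and `keep` for sequence 105 (still being read), which is exactly the behavior needed to prevent log loss.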

Future Upgrade Direction

The plan is to adopt OGG Integrated Mode (downstream) with a mining database that receives archived logs from all sources. The mining database will run in RAC for high availability, reducing the topology from “N sources + 1 hub + 1 target” to “1 mining DB + 1 target”. Open questions include compatibility when the mining database runs a newer Oracle release.
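Downstream integrated capture is configured in two places: each source ships redo to the mining database, and the Extract logs in to both. The sketch below is an assumption about what that configuration could look like; service names, process names, and schemas are illustrative:

```
-- On each source database: ship redo to the downstream mining database
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=miningdb ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';

-- Extract parameter file: connect to the source for metadata,
-- mine redo on the downstream database
EXTRACT extds
USERID ogg@sourcedb, PASSWORD ******
TRANLOGOPTIONS MININGUSER ogg@miningdb, MININGPASSWORD ******
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
EXTTRAIL ./dirdat/ds
TABLE BILLING.*;
```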

Conclusion

OGG meets the regional query requirements with acceptable latency (seconds to minutes, occasional hours during billing periods), low I/O overhead, and manageable operational complexity. Ongoing work focuses on Integrated Mode testing, version compatibility, and further automation.

Tags: operations, database architecture, data replication, version compatibility, Oracle, GoldenGate
Written by

dbaplus Community

Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS & DAMS conferences, delivered by industry experts.
