
Migrating a Single‑Instance Oracle 11g Database to a 3‑Node RAC on Linux with Minimal Downtime

This guide details how to move an Oracle 11.2.0.3.0 single‑instance database from Windows to a three‑node RAC on Linux using Data Guard, covering compatibility, required patches, configuration steps, parameter settings, and post‑migration tasks to ensure a fast, low‑downtime transition.


A single‑instance Oracle Database 11.2.0.3.0 on Windows was migrated to a three‑node Oracle RAC on Linux without changing the database version and with minimal downtime.

Cross‑platform Data Guard (DG) configuration

Because the source and target versions are identical, a heterogeneous Active Data Guard (ADG) configuration can be used. Oracle's cross‑platform compatibility matrix permits Windows x86‑64 (platform ID 12) as primary and Linux x86‑64 (platform ID 13) as standby for Oracle 11g, provided Patch 13104881 is applied on the Linux side. The patch is required only when shipping redo from a Windows primary to a Linux standby.
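The platform IDs 12 (Windows x86‑64) and 13 (Linux x86‑64) can be confirmed from the standard dictionary views; a quick check on the source database might look like:

-- List the two platforms involved in this migration
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
WHERE  platform_id IN (12, 13);

-- Verify the platform of the current (source) database
SELECT platform_id, platform_name FROM v$database;

Both platforms are little‑endian, which is what makes the cross‑platform standby feasible at all.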

Migration workflow

1. Install Oracle Grid Infrastructure on the new Linux nodes.

2. Install the Oracle RAC database software.

3. Create ASM disk groups and configure the listener.

4. On the first RAC node, create a Windows‑to‑Linux ADG standby using the node's VIP address in real‑time sync mode, storing control files, data files and redo logs on shared ASM.
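RMAN active duplication is one common way to instantiate the standby in the final step above. A sketch, assuming TNS aliases dbm_pri and dbm_stby (illustrative placeholders) point at the Windows primary and the first RAC node:

-- Connect RMAN to both databases (aliases are placeholders)
rman target sys/oracle@dbm_pri auxiliary sys/oracle@dbm_stby

-- Duplicate over the network; the *_NAME_CONVERT parameters described
-- below handle the Windows-to-ASM path translation
DUPLICATE TARGET DATABASE
  FOR STANDBY
  FROM ACTIVE DATABASE
  DORECOVER
  NOFILENAMECHECK;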

Parameter settings for heterogeneous DG

Set DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT to map Windows paths to Linux ASM paths. Both parameters are static, so changing them on the primary would force a restart; in practice, however, they only need to be set on the standby, which applies them during the duplicate operation, so DG can be configured without restarting the primary.

LOG_FILE_NAME_CONVERT='+DATA01/dbm/onlinelog/','+DATA_DM01/dbm/onlinelog/','+FRA01/dbm/onlinelog/','+DBFS_DG/dbm/onlinelog/'
DB_FILE_NAME_CONVERT='+DATA01/dbm/datafile/','+DATA_DM01/dbm/datafile/','+DATA01/dbm/tempfile/','+DATA_DM01/dbm/tempfile/'

Include trailing slashes and cover every data file, temp file and redo log location. Mirror the conversion pairs between primary and standby: if the standby maps 'A‑location','B‑location', the primary maps 'B‑location','A‑location', so each side translates the other's paths correctly after a role change.
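As an illustration of the mirroring rule (using the same disk group names shown above):

# On the standby: primary location first, standby location second
*.db_file_name_convert='+DATA01/dbm/datafile/','+DATA_DM01/dbm/datafile/'

# On the primary: the same pair reversed, used only if it ever runs as standby
*.db_file_name_convert='+DATA_DM01/dbm/datafile/','+DATA01/dbm/datafile/'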

Post‑DG troubleshooting

If log transport stalls, disable and re‑enable the destination:

alter system set log_archive_dest_state_2=defer;
alter system set log_archive_dest_state_2=enable;
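After re‑enabling the destination, transport and apply health can be checked from the standard views:

-- On the primary: confirm redo is shipping to destination 2
SELECT dest_id, status, error FROM v$archive_dest WHERE dest_id = 2;

-- On the standby: check transport and apply lag
SELECT name, value FROM v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');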

Keep ORACLE_SID, instance_name and db_unique_name identical on primary and standby during DG setup.

Switchover to RAC primary

After the standby is fully synchronized, perform a switchover or, if real‑time sync is guaranteed, stop the original primary and activate the standby as a read‑write RAC node.
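For the switchover path, the standard 11g command sequence is a useful reference (run as SYSDBA; this is the generic procedure, not commands quoted from the source migration):

-- On the original (Windows) primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;

-- On the (Linux) standby, once the primary has completed its role change
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;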

Converting the single‑instance database to RAC

Back up the original init.ora (pfile) and add RAC‑specific entries, e.g.:

*.cluster_database=TRUE
*.cluster_database_instances=3
*.undo_management=AUTO
ORCL1.undo_tablespace=UNDOTBS1
ORCL1.instance_name=ORCL1
ORCL1.instance_number=1
ORCL1.thread=1
ORCL1.local_listener=...

Modify control_files to point to a shared location on ASM.
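An illustrative pfile entry (disk group and file names are placeholders, not values from the source environment):

*.control_files='+DATA/ORCL/controlfile/control01.ctl','+FRA/ORCL/controlfile/control02.ctl'

Every RAC instance must see the same control files, which is why they must live on shared ASM rather than local disk.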

Create an SPFILE from the modified pfile, placing it on shared ASM so every instance reads the same copy:

export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
create spfile='+DATA/ORCL/spfileORCL.ora' from pfile='/tmp/initORCL.ora';
exit

Create a password file for each instance:

orapwd file=$ORACLE_HOME/dbs/orapwORCL1 password=oracle

Start each instance in mount mode, rename the data files and redo logs to the shared ASM path, and add redo log groups for each thread, e.g.:

alter database add logfile thread 2 group 3 ('+DATA/redo2_01_100.dbf') size 100M;
alter database enable public thread 2;

Create additional undo tablespaces for the extra instances:

CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE '+DATA/undotbs2_01.dbf' SIZE 200M;

Open the database and run the cluster script to create the RAC data‑dictionary views:

SQL> @$ORACLE_HOME/rdbms/admin/catclust.sql

On each additional node, set ORACLE_SID and ORACLE_HOME, create the corresponding init.ora and password file, then start the instance.

Register each instance with the cluster using srvctl (add database, add instance, modify ASM dependency if needed). Example:

srvctl add database -d ORCL -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/ORCL/spfileORCL.ora
srvctl add instance -d ORCL -i ORCL1 -n node1
srvctl add instance -d ORCL -i ORCL2 -n node2
srvctl add instance -d ORCL -i ORCL3 -n node3
srvctl modify instance -d ORCL -i ORCL1 -s +ASM1
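After registration, cluster‑wide status can be confirmed with srvctl and the global dynamic views:

srvctl status database -d ORCL

-- From SQL*Plus on any node: all instances should report OPEN
SELECT inst_id, instance_name, status FROM gv$instance;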

Final notes

With proper preparation and scripting, the single‑instance to RAC conversion can be completed quickly. After RAC is operational, adjust DG parameters and IP addresses to finalize the cross‑platform migration.

Tags: Cross-Platform, Linux, Oracle, Database Migration, ASM, RAC, Data Guard
Written by

ITPUB

Official ITPUB account sharing technical insights, community news, and exciting events.
