
How to Export a 2TB Oracle LOB Table Using Rowid Partitioning

This article details a real‑world migration from Oracle 11g on IBM AIX to Oracle 19c, focusing on exporting a 2 TB LOB‑heavy table by adjusting undo retention, then using ROWID‑based modulo queries with Data Pump to achieve balanced, error‑free parallel exports.


Background

Oracle 11g Premier Support has long since ended, and Extended Support expired at the end of 2020, leaving the release unsupported. A client needed to migrate an application database from an IBM AIX midrange server to a domestic database appliance, upgrading the Oracle version directly from 11g to 19c.

Problem with LOB Columns

The overall database size is modest (about 3 TB), but a single table occupies roughly 2 TB and consists almost entirely of LOB columns. The table is not partitioned, making conventional export approaches risky.
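Before planning the export, it helps to confirm where the space actually lives. A hedged sketch of a dictionary query for LOB segment sizes (OWNER and TABLE_NAME are placeholders for the real names):

```sql
-- Map each LOB column to its segment and report the segment size.
-- DBA_LOBS links columns to LOB segments; DBA_SEGMENTS holds the bytes.
SELECT l.column_name,
       l.segment_name,
       ROUND(s.bytes / 1024 / 1024 / 1024, 1) AS size_gb
FROM   dba_lobs     l
JOIN   dba_segments s ON s.segment_name = l.segment_name
                     AND s.owner        = l.owner
WHERE  l.owner      = 'OWNER'
AND    l.table_name = 'TABLE_NAME';
```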

Conventional export attempts on such a large LOB table typically fail with ORA‑01555 ("snapshot too old"), because the export query runs long enough for the undo data it needs for a consistent read to be overwritten.

Initial Mitigation Attempts

One common remedy is to increase the undo_retention parameter and adjust the LOB retention setting:

ALTER SYSTEM SET undo_retention=7200 SCOPE=BOTH;
ALTER TABLE table_name MODIFY LOB (col_name) (RETENTION);
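The current settings can be checked before and after such a change. A sketch, assuming the OWNER and TABLE_NAME placeholders are replaced with real names:

```sql
-- Current undo retention (in seconds).
SHOW PARAMETER undo_retention;

-- Per-LOB retention settings for the table's LOB columns.
SELECT column_name, retention, pctversion
FROM   dba_lobs
WHERE  owner = 'OWNER' AND table_name = 'TABLE_NAME';
```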

However, in a production environment changing such parameters carries risk, so an alternative approach was sought.

Rowid‑Based Export Strategy

Since ORA‑01555 originates from long‑running queries, the team decided to reduce the amount of data exported per job by splitting the table using the ROWID pseudo‑column. ROWID encodes four parts: the data object ID, the relative file ID, the block number, and the row number.

An extended ROWID occupies 10 bytes (80 bits): data object number 32 bits, relative file number 10 bits, block number 22 bits, row number 16 bits. Consequently, a tablespace can have at most 1,023 data files, each file can contain up to about 4 million (2^22) blocks, and each block can address up to 65,536 (2^16) rows.
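The four components can be inspected directly with the DBMS_ROWID package. A sketch over a small sample of rows (owner.table_name is a placeholder):

```sql
-- Decompose a few ROWIDs into their four components.
SELECT ROWID,
       DBMS_ROWID.ROWID_OBJECT(ROWID)       AS data_object_id,
       DBMS_ROWID.ROWID_RELATIVE_FNO(ROWID) AS relative_file_id,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) AS block_number,
       DBMS_ROWID.ROWID_ROW_NUMBER(ROWID)   AS row_in_block
FROM   owner.table_name
WHERE  ROWNUM <= 5;
```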

Using the DBMS_ROWID package, the table can be exported in evenly sized batches with a query such as:

QUERY="WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID),10)=1"
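Before committing to ten parallel jobs, the bucket sizes can be verified. A sketch of a balance check (note that this full scan is itself expensive on a 2 TB table, so it may be worth running against a sample):

```sql
-- Count rows per modulo bucket to confirm the ten batches are balanced.
SELECT MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID), 10) AS bucket,
       COUNT(*)                                      AS row_count
FROM   owner.table_name
GROUP  BY MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID), 10)
ORDER  BY bucket;
```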

Export Script Modifications

The Data Pump (EXPDP) parameter file was updated to use DATA_ONLY compression and the ROWID‑based query. An example snippet is shown below:

# One of ten parameter files; this one exports the batch whose block
# numbers satisfy MOD(block#,10)=1. Each batch writes its own dump file
# so the ten parallel jobs do not collide on a single file name.
USERID='/ as sysdba'
DIRECTORY=DMP
DUMPFILE=export_p1.dmp
LOGFILE=export_p1.log
CONTENT=DATA_ONLY
COMPRESSION=DATA_ONLY
TABLES=owner.table_name
QUERY=owner.table_name:"WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID),10)=1"

The choice of rowid_block_number works well because the table has a single OBJECT_ID, a limited number of file IDs (≈150), and many block numbers, so the modulo operation yields balanced partitions.
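Rather than editing ten parameter files by hand, the per-batch files can be generated from a loop. A minimal sketch, assuming owner.table_name and the DMP directory object are placeholders; it only prints the expdp commands, which in a real run would be launched in the background (e.g. under nohup):

```shell
#!/bin/sh
# Generate one Data Pump parameter file per modulo remainder (0-9)
# and print the expdp command for each batch.
for i in 0 1 2 3 4 5 6 7 8 9
do
cat > "export_p${i}.par" <<EOF
USERID='/ as sysdba'
DIRECTORY=DMP
DUMPFILE=export_p${i}.dmp
LOGFILE=export_p${i}.log
CONTENT=DATA_ONLY
COMPRESSION=DATA_ONLY
TABLES=owner.table_name
QUERY=owner.table_name:"WHERE MOD(DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID),10)=${i}"
EOF
# Print the command; a real run would execute it instead of echoing.
echo "expdp parfile=export_p${i}.par"
done
```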

Execution Results

The job was launched with ten parallel processes, each handling one modulo value (0‑9). Log analysis showed that each batch consumed a similar amount of time, confirming an even distribution of work and successful export of the 2 TB LOB table.

Key Takeaways

When faced with exporting extremely large LOB tables, consider the following strategies:

If the table is partitioned, export each partition separately.

Identify a column with evenly distributed values and filter exports by that column.

Apply the ROWID‑based modulo technique described above to achieve uniform batch sizes without altering database parameters.

These methods reduce the amount of data processed per export, mitigate ORA‑01555 errors, and shorten overall migration time.

Tags: Oracle, Database Migration, Export, LOB, Rowid, Data Pump
Written by dbaplus Community
Enterprise-level professional community for Database, BigData, and AIOps. Daily original articles, weekly online tech talks, monthly offline salons, and quarterly XCOPS&DAMS conferences—delivered by industry experts.
