Master Oracle Data Pump: expdp & impdp Commands, Parameters, and Troubleshooting
This guide provides a comprehensive overview of Oracle Data Pump utilities, detailing how to create directories, use expdp and impdp commands for exporting and importing schemas, tables, and full databases, explaining key parameters, interactive mode, and common error resolutions.
Introduction
expdp and impdp are Oracle Data Pump utilities used to move data between Oracle databases. Unlike the older exp/imp tools, Data Pump jobs run on the database server itself, so dump files are read and written through server-side directory objects rather than on client machines. This article summarizes the most frequently used commands and parameters, with examples and troubleshooting tips.
Directory Management
Before using Data Pump, a directory object must exist and be granted appropriate privileges.
select * from dba_directories;
create directory my_dir as '/home/oracle/tmp';
grant read, write on directory my_dir to scott;

Export (expdp) Commands
Key considerations for export:
The exporting user needs read/write access to the directory object.
The directory path must exist on the OS.
The Oracle OS user must have permission to read/write the path.
Exporting as SYSTEM also exports system metadata, which regular users cannot export.
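The OS-side prerequisites above can be sketched in shell. This is a minimal sketch only: a throwaway /tmp path is used so the commands run without root, and the oracle:oinstall ownership shown in the comment is the usual convention, not something mandated by the article.

```shell
# Illustrative path; in a real setup this would be e.g. /home/oracle/tmp,
# owned by the Oracle software owner (commonly oracle:oinstall).
DIR=/tmp/dpump_demo

mkdir -p "$DIR"    # the path must exist before CREATE DIRECTORY is run
chmod 750 "$DIR"   # the Oracle OS user needs read/write here

# With root privileges you would also align ownership, e.g.:
# chown oracle:oinstall "$DIR"

ls -ld "$DIR"
```

Only after the OS path is in place does the CREATE DIRECTORY statement from the previous section make the path usable by Data Pump.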
Typical expdp command examples:
# Export a single table
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=scott.emp

# Export multiple tables
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log tables=\(scott.emp,scott.dept\)

# Export an entire schema
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott

# Export the whole database (excluding SYS, ORDSYS, MDSYS)
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log full=yes

Advanced export options include parallel execution, metadata-only export, and selective object inclusion/exclusion:
# Parallel export
expdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=expdp.log schemas=scott parallel=5

# Export only metadata
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott content=metadata_only

# Export only procedures
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott include=procedure

# Exclude indexes
expdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=expdp.log schemas=scott exclude=index

expdp Parameter Details
attach=[schema_name.]job_name                  # Attach to a running job (interactive mode)
full=[yes|no]                                  # Export all data and metadata (requires the datapump_exp_full_database role)
schemas=schema_name[,...]                      # Export one or more schemas (default: current user)
tables=table_name[:partition][,...]            # Export specific tables
tablespaces=tablespace_name[,...]              # Export specific tablespaces
transport_tablespaces=...                      # Export transportable tablespace metadata
query=...                                      # Export only rows that satisfy a WHERE clause
exclude=object_type[:name_clause][,...]        # Exclude certain object types
include=object_type[:name_clause][,...]        # Include only certain object types
directory=directory_object                     # Directory for dump and log files (default: DATA_PUMP_DIR)
dumpfile=[directory_object:]file_name[,...]    # Dump file name (default: expdat.dmp)
logfile=[directory_object:]file_name           # Log file name (default: export.log)
content=[all|data_only|metadata_only]          # What to export (default: all)
parallel=integer                               # Degree of parallelism (default: 1)
compression=[all|data_only|metadata_only|none] # Compression level (default: metadata_only)
parfile=[directory_path]file_name              # Parameter file for additional options
network_link=source_database_link              # Export directly from a remote database
filesize=integer[b|kb|mb|gb|tb]                # Maximum size of each dump file (default: unlimited)
job_name=jobname_string                        # Name of the export job
version=[compatible|latest|version_string]     # Compatibility level of the dump file (default: compatible)

Import (impdp) Commands
Important notes for import:
expdp‑generated dump files must be imported with impdp, not the older imp utility.
If a table already exists in the target, impdp skips it by default (table_exists_action=skip) and continues with the remaining objects.
Ensure that the target tablespace and schema match the dump file’s definitions.
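The skip-by-default behavior noted above can be overridden with table_exists_action. As a hedged sketch, longer option sets are often kept in a parameter file and passed via parfile=; the file name and option values below are illustrative, and the impdp invocation is shown commented out because it needs a running database.

```shell
# Illustrative parameter file for an import.
cat > /tmp/imp_scott.par <<'EOF'
directory=my_dir
dumpfile=expdp.dmp
logfile=impdp.log
schemas=scott
table_exists_action=truncate
EOF

# The actual import would be run as:
# impdp system/oracle parfile=/tmp/imp_scott.par

cat /tmp/imp_scott.par
```

A parameter file also avoids shell quoting issues, e.g. the escaped parentheses needed for table lists on the command line.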
Typical impdp command examples:
# Import everything from a dump file
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log full=yes

# Import a single table
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=scott.emp

# Import multiple tables
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log tables=\(scott.emp,scott.dept\)

# Import a schema
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log schemas=scott

# Parallel import
impdp system/oracle directory=my_dir dumpfile=expdp%U.dmp logfile=impdp.log parallel=5

# Import only metadata
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log content=metadata_only

# Remap table name during import
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_table=scott.emp:emp1

# Remap schema name
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log remap_schema=scott:tim

# Write all DDL to a separate SQL file
impdp system/oracle directory=my_dir dumpfile=expdp.dmp logfile=impdp.log sqlfile=scott.sql

impdp Parameter Details
attach=[schema_name.]job_name                  # Attach to a running import job
full=[yes|no]                                  # Import all data and metadata
schemas=schema_name[,...]                      # Import one or more schemas
tables=table_name[:partition][,...]            # Import specific tables
tablespaces=tablespace_name[,...]              # Import specific tablespaces
transport_tablespaces=...                      # Import transportable tablespace metadata
query=...                                      # Import only rows that satisfy a WHERE clause
exclude=object_type[:name_clause][,...]        # Exclude certain object types
include=object_type[:name_clause][,...]        # Include only certain object types
directory=directory_object                     # Directory for dump and log files (default: DATA_PUMP_DIR)
dumpfile=[directory_object:]file_name[,...]    # Dump file name (default: expdat.dmp)
logfile=[directory_object:]file_name           # Log file name (default: import.log)
content=[all|data_only|metadata_only]          # What to import (default: all)
parallel=integer                               # Degree of parallelism (default: 1)
parfile=[directory_path]file_name              # Parameter file for additional options
network_link=source_database_link              # Import directly from a remote database
job_name=jobname_string                        # Name of the import job
version=[compatible|latest|version_string]     # Compatibility level of the dump file
remap_table=old:new                            # Rename a table during import
remap_schema=old:new                           # Rename a schema during import
remap_tablespace=old:new                       # Rename a tablespace during import
transform=transform_name:value[:object_type]   # Modify DDL during import
sqlfile=[directory_object:]file_name           # Write all DDL to a separate file instead of importing
table_exists_action=[skip|append|truncate|replace] # Action when the target table already exists (default: skip)

Interactive Mode
While an export or import is running, press Ctrl+C in the client to switch into interactive command mode; the job itself keeps running in the background. From a new session, use expdp attach=jobname or impdp attach=jobname to reattach to a running job. Job names can be found in the log file or by querying dba_datapump_jobs.
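A typical interactive session looks like the sketch below; the job name is illustrative, and the commands shown (status, stop_job, start_job, continue_client, kill_job) are the ones most commonly used at the Export>/Import> prompt.

```
$ expdp attach=SYS_EXPORT_SCHEMA_01   # reattach from a new session
Export> status                        # show job progress and worker state
Export> stop_job=immediate            # pause the job; it remains restartable
Export> start_job                     # resume a stopped job
Export> continue_client               # leave interactive mode, back to logging
Export> kill_job                      # abort the job and remove its master table
```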
Common Errors and Solutions
Directory not created
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation

Ensure the directory object exists, points to a valid OS path, and the Oracle OS user has read/write permissions on that path.
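The OS-side half of that check can be scripted. This sketch uses only standard shell tests; the function name is made up for illustration, and /tmp stands in for the directory object's path so the example runs anywhere (run it as the Oracle software owner against the real path).

```shell
# Verify that a Data Pump directory path exists and is writable
# by the current OS user.
check_dpump_dir() {
    path="$1"
    [ -d "$path" ] || { echo "missing: $path"; return 1; }
    [ -w "$path" ] || { echo "not writable: $path"; return 1; }
    echo "ok: $path"
}

check_dpump_dir /tmp   # prints "ok: /tmp"
```

If this passes but the ORA-29283 persists, re-check that the directory object's path in dba_directories matches the path you tested.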
Importing an exp‑generated dump with impdp
ORA-39000: bad dump file specification
ORA-39143: dump file "/u01/20161031/bjh02.dmp" may be an original export dump file

Dump files produced by the original exp utility cannot be read by impdp. The two tool sets are not interchangeable: import exp-generated dumps with imp, and expdp-generated dumps with impdp.
Version mismatch
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/oracle/EXPDP20161024_1.DMP" for read
ORA-27037: unable to obtain file status

If the source database version is newer than the target, add version=<target_version> to the export command so the older release can read the dump file.
ITPUB
Official ITPUB account sharing technical insights, community news, and exciting events.
