Using DTLE 4.22.01.0 for Oracle‑MySQL Incremental Data Synchronization
This article provides a step‑by‑step guide on configuring DTLE 4.22.01.0 to perform Oracle‑to‑MySQL incremental data synchronization, covering type mapping, DML/DDL support, environment setup, job creation, data verification, and handling of compatibility limitations.
Introduction
The DTLE 4.22.01.0 release adds support for Oracle‑MySQL incremental data synchronization. This guide explains the feature and how to use it.
1. Current Status
Supported Incremental Sync
- Incremental replication starting from a specified SCN (system change number).
- Incremental replication starting from the task's start time.
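For SCN-based startup, the starting point can be read from the source database; this is the value that goes into the job's `oracle_src_task_config.scn` field shown later in this guide. For example:

```sql
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
```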
Type Mapping
Supported Oracle‑to‑MySQL type conversions:
| Oracle | MySQL | Limitations |
| --- | --- | --- |
| BINARY_DOUBLE | float | MySQL does not support Inf/-Inf/NaN; NULL is used instead. |
| CHAR(n), CHARACTER(n) | CHAR(n), CHARACTER(n) | |
| DATE | datetime | |
| DECIMAL(p,s), DEC(p,s) | DECIMAL(p,s), DEC(p,s) | |
| DOUBLE PRECISION | DOUBLE PRECISION | |
| FLOAT(p) | DOUBLE | |
| INTEGER, INT | INT | Extreme value issues (see issue #825). |
| INTERVAL DAY(p) TO SECOND(s) | VARCHAR(30) | |
| INTERVAL YEAR(p) TO MONTH | VARCHAR(30) | |
| NCHAR VARYING(n) | NCHAR VARYING(n) | |
| NCHAR(n) | NCHAR(n)/NVARCHAR(n) | |
| NUMBER(p,0), NUMBER(p) | TINYINT/SMALLINT/INT/BIGINT/DECIMAL(p) | |
| NUMBER(p,s) | DECIMAL(p,s) | |
| NUMBER, NUMBER(*) | DOUBLE | |
| NUMERIC(p,s) | NUMERIC(p,s) | |
| NVARCHAR2(n) | NVARCHAR(n) | |
| RAW(n) | VARBINARY(n) | |
| REAL | DOUBLE | |
| ROWID | CHAR(100) | |
| SMALLINT | DECIMAL(38) | |
| TIMESTAMP(p) | datetime | |
| VARCHAR2(n) | VARCHAR(n) | |
| VARCHAR(n) | VARCHAR(n) | |
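As an illustration of the mapping (table and column names here are hypothetical, not from the release), an Oracle table such as the following would be expected to land on MySQL roughly as sketched below; the integer type actually chosen for NUMBER(p) depends on the precision p:

```sql
-- Oracle source table (hypothetical example)
CREATE TABLE ACTION_DB.TYPE_DEMO (
  ID   NUMBER(9),       -- NUMBER(p) maps to TINYINT/SMALLINT/INT/BIGINT/DECIMAL(p) by precision
  NAME VARCHAR2(64),    -- VARCHAR2(n) maps to VARCHAR(n)
  TS   TIMESTAMP(6)     -- TIMESTAMP(p) maps to datetime
);

-- Expected MySQL target table per the mapping above (a sketch, not verified DTLE output)
CREATE TABLE `ACTION_DB`.`TYPE_DEMO` (
  `ID`   INT,
  `NAME` VARCHAR(64),
  `TS`   DATETIME
);
```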
Pending Types
| Oracle | MySQL | Reason Not Supported |
| --- | --- | --- |
| BINARY_FLOAT | float | MySQL does not support Inf/-Inf/NaN; float cannot match the precision exactly. |
| BLOB | BLOB | Current logic cannot fetch enough values from redoSQL. |
| CLOB | CLOB | Current logic cannot fetch enough values from redoSQL. |
| LONG | LONGTEXT | Only INSERT is supported. |
| LONG RAW | LONGBLOB | Only INSERT is supported. |
| NCLOB | TEXT | Current logic cannot fetch enough values from redoSQL. |
| TIMESTAMP(p) WITH TIME ZONE | datetime | Timezone handling not implemented. |
Unsupported Types
| Oracle | MySQL | Reason |
| --- | --- | --- |
| BFILE | VARCHAR(255) | Not supported by LogMiner. |
| UROWID(n) | VARCHAR(n) | LogMiner cannot construct new SQL. |
| XMLTYPE | VARCHAR(30) | Not supported by LogMiner. |
2. DML Support
a. DML Types
| DML Type | Oracle SQL | MySQL SQL (as generated by DTLE) |
| --- | --- | --- |
| INSERT | `INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (0, NULL)` | ``replace into `ACTION_DB`.`CHAR_255_COLUMNS` (`COL1`, `COL2`) values ('0', NULL)`` |
| UPDATE | `UPDATE ACTION_DB.CHAR_255_COLUMNS SET COL2='abcdefghijklmnopqrstuvwxyz' WHERE COL1=0` | ``update `ACTION_DB`.`CHAR_255_COLUMNS` set `COL1`='0', `COL2`='abcdefghijklmnopqrstuvwxyz' where ((`COL1`='0') and (`COL2` is NULL)) limit 1`` |
| DELETE | `DELETE FROM ACTION_DB.CHAR_255_COLUMNS WHERE COL1=0` | ``delete from `ACTION_DB`.`CHAR_255_COLUMNS` where ((`COL1`='0') and (`COL2`='abcdefghijklmnopqrstuvwxyz')) limit 1`` |
b. DML Function Support
| Function | Supported | Notes |
| --- | --- | --- |
| CURRENT_TIMESTAMP | Yes | |
| DATE | Yes | |
| EMPTY_BLOB | Yes | Parsed as NULL |
| EMPTY_CLOB | Yes | Parsed as NULL |
| HEXTORAW | Yes | |
| LOCALTIMESTAMP | Yes | |
| RAWTOHEX | Yes | |
| RAWTOHEX(CHR()) | Yes | |
| SYSTIMESTAMP | Yes | |
| TO_DATE | Yes | |
| TO_DSINTERVAL | Yes | |
| TO_TIMESTAMP | Yes | |
| TO_YMINTERVAL | Yes | |
| UNISTR | Yes | |
3. DDL Support
| DDL | Target | Options |
| --- | --- | --- |
| CREATE | Table | DEFAULT CREATE |
| ALTER | Table | Add column / delete column / rename column (MySQL 8.0 syntax only) / change column type |
| DROP | Table | DEFAULT DROP |
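For example, a column rename on the source is replayed on the target using MySQL 8.0's `RENAME COLUMN` syntax, which is why renames require a MySQL 8.0 target (the column name here is illustrative):

```sql
-- Oracle (source)
ALTER TABLE ACTION_DB.CHAR_255_COLUMNS RENAME COLUMN COL2 TO COL2_NEW;

-- MySQL 8.0 (target) -- the RENAME COLUMN form does not exist before 8.0
ALTER TABLE `ACTION_DB`.`CHAR_255_COLUMNS` RENAME COLUMN `COL2` TO `COL2_NEW`;
```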
4. Environment Preparation
1. Enable Oracle Archive Log
shell> su oracle
shell> mkdir /u01/app/oracle/oradata/archive_log
shell> sqlplus sys/oracle as sysdba
SQL> alter system set log_archive_dest_1='location=/u01/app/oracle/oradata/archive_log' scope=spfile;
SQL> alter system set db_recovery_file_dest_size = 10G;
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database add logfile group 3 '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo01.log' size 500m;
SQL> alter database add logfile group 4 '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo02.log' size 500m;
SQL> alter database add logfile group 5 '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo03.log' size 500m;
SQL> alter database archivelog;
SQL> alter database add supplemental log data (all) columns;
SQL> alter database open;
SQL> archive log list;
# "Archive Mode" indicates archiving is enabled; the destination shows the log storage path.

2. Install LogMiner (installed with Oracle by default)
Check for dbms_logmnr and dbms_logmnr_d packages; if missing, run:
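One way to check for the packages, for example:

```sql
SQL> SELECT object_name, status FROM dba_objects WHERE object_name IN ('DBMS_LOGMNR', 'DBMS_LOGMNR_D') AND object_type = 'PACKAGE';
```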
shell> cat $ORACLE_HOME/rdbms/admin/dbmslm.sql | sqlplus sys/oracle as sysdba
shell> cat $ORACLE_HOME/rdbms/admin/dbmslmd.sql | sqlplus sys/oracle as sysdba

3. Create LogMiner Role
shell> su oracle
shell> sqlplus sys/oracle as sysdba
SQL> create user roma_logminer identified by oracle default tablespace users;
SQL> GRANT CREATE SESSION TO roma_logminer;
SQL> GRANT SET CONTAINER TO roma_logminer;
SQL> GRANT SELECT ON V_$DATABASE TO roma_logminer;
SQL> GRANT FLASHBACK ANY TABLE TO roma_logminer;
SQL> GRANT SELECT ANY TABLE TO roma_logminer;
SQL> GRANT SELECT_CATALOG_ROLE TO roma_logminer;
SQL> GRANT EXECUTE_CATALOG_ROLE TO roma_logminer;
SQL> GRANT SELECT ANY TRANSACTION TO roma_logminer;
SQL> GRANT CREATE TABLE TO roma_logminer;
SQL> GRANT LOCK ANY TABLE TO roma_logminer;
SQL> GRANT CREATE SEQUENCE TO roma_logminer;
SQL> GRANT EXECUTE ON DBMS_LOGMNR TO roma_logminer;
SQL> GRANT EXECUTE ON DBMS_LOGMNR_D TO roma_logminer;
SQL> GRANT SELECT ON V_$LOG TO roma_logminer;
SQL> GRANT SELECT ON V_$LOG_HISTORY TO roma_logminer;
SQL> GRANT SELECT ON V_$LOGMNR_LOGS TO roma_logminer;
SQL> GRANT SELECT ON V_$LOGMNR_CONTENTS TO roma_logminer;
SQL> GRANT SELECT ON V_$LOGMNR_PARAMETERS TO roma_logminer;
SQL> GRANT SELECT ON V_$LOGFILE TO roma_logminer;
SQL> GRANT SELECT ON V_$ARCHIVED_LOG TO roma_logminer;
SQL> GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO roma_logminer;
SQL> alter user roma_logminer quota unlimited on users;

4. Deploy Single-Node DTLE 4.22.01.0
Download the RPM:
https://github.com/actiontech/dtle/releases/download/v4.22.01.0/dtle-ce-4.22.01.0.x86_64.rpm

5. Create Oracle-MySQL Migration Job
1. Obtain Token
shell> curl -s -X POST "http://172.100.9.11:8190/v2/loginWithoutVerifyCode" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"password\": \"admin\", \"tenant\": \"platform\", \"username\": \"admin\"}" | jq
{"message":"ok","data":{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."}}

2. Prepare job.json
{
"job_id": "test_oracle",
"src_task": {
"connection_config": {
"database_type": "Oracle",
"host": "172.100.9.31",
"port": 1521,
"user": "roma_logminer",
"password": "oracle",
"service_name": "XE"
},
"node_id": "96d28881-f91b-19f4-1614-4d8a0e718e2f",
"binlog_relay": false,
"repl_chan_buffer_size": 120,
"group_max_size": 1,
"group_timeout": 100,
"oracle_src_task_config": {"scn": 0},
"task_name": "src",
"replicate_do_db": [{"table_schema": "ACTION_DB"}]
},
"is_password_encrypted": false,
"dest_task": {
"connection_config": {
"database_type": "MySQL",
"host": "172.100.9.1",
"port": 3306,
"user": "test_dest",
"password": "test_dest"
},
"node_id": "96d28881-f91b-19f4-1614-4d8a0e718e2f",
"task_name": "dest",
"mysql_dest_task_config": {}
},
"task_step_name": "job_stage_full",
"failover": true,
"retry": 2
}

3. Create Job via API
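The token returned by the login call in step 1 is what goes into the `Authorization` header. It can be extracted from the login response with `jq` (response abbreviated here; `resp` stands in for the output of the login `curl`):

```shell
# Login response from step 1 (token value abbreviated)
resp='{"message":"ok","data":{"token":"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."}}'
# Extract the token field; pass it later as: -H "Authorization: $token"
token=$(printf '%s' "$resp" | jq -r '.data.token')
echo "$token"
```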
shell> curl -s -X POST "http://172.100.9.11:8190/v2/job/migration/create" -H "accept: application/json" -H "Authorization: <token>" -H "Content-Type: application/json" -d @job.json | jq
{"job":{"job_id":"test_oracle-migration","task_step_name":"job_stage_full","reverse":false,"failover":true,"is_password_encrypted":false,...}}

4. Insert Test Data into Oracle
SQL> CREATE TABLESPACE ACTION_DB datafile 'ACTION_DB.dbf' size 100M;
SQL> CREATE USER ACTION_DB identified by ACTION_DB default tablespace ACTION_DB;
SQL> GRANT unlimited tablespace to ACTION_DB;
SQL> CREATE TABLE ACTION_DB.CHAR_255_COLUMNS(col1 INT, col2 CHAR(255));
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (0, NULL);
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (1, 'abcdefghijklmnopqrstuvwxyz');
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (2, 'ABCDEFGHIJKLMNOPQRSTUVWXYZ');
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (3, '1234567890');
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (4, 1234567890);
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (5, '~`!@#$%^&*()-=_+{}[]|\:;<>>,.?/');
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (6, '中文测试abc');
SQL> INSERT INTO ACTION_DB.CHAR_255_COLUMNS VALUES (7, '·~!@#¥%……&*()-=——+{}【】、|;‘:“,。/《》?');

5. Verify Synchronization on MySQL
mysql> SHOW CREATE TABLE ACTION_DB.CHAR_255_COLUMNS\G
Create Table: CREATE TABLE `CHAR_255_COLUMNS` (
`COL1` int(11) DEFAULT NULL,
`COL2` char(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
mysql> SELECT * FROM ACTION_DB.CHAR_255_COLUMNS;
+------+-----------------------------------------------------------------------------+
| COL1 | COL2 |
+------+-----------------------------------------------------------------------------+
| 0 | NULL |
| 1 | abcdefghijklmnopqrstuvwxyz |
| 2 | ABCDEFGHIJKLMNOPQRSTUVWXYZ |
| 3 | 1234567890 |
| 4 | 1234567890 |
| 5 | ~`!@#$%^&*()-=_+{}[]|\:;<>>,.?/ |
| 6 | 中文测试abc |
| 7 | ·~!@#¥%……&*()-=——+{}【】、|;‘:“,。/《》? |
+------+-----------------------------------------------------------------------------+

6. Usage Limitations
Because Oracle and MySQL are heterogeneous databases, some Oracle SQL statements cannot be translated into valid MySQL (e.g., out-of-range numeric values, or dates outside MySQL's supported range). By default, DTLE stops the task when such an error occurs.
To skip such errors, add the environment variable SkipErr=true to the DTLE service file and restart:
shell> vi /etc/systemd/system/multi-user.target.wants/dtle-nomad.service
# Add
[Service]
Environment="SkipErr=true"
shell> systemctl daemon-reload
shell> systemctl restart dtle-nomad

If you encounter issues, contact the DTLE community.
DTLE repository: https://github.com/actiontech/dtle
DTLE documentation: https://actiontech.github.io/dtle-docs-cn/
Aikesheng Open Source Community
The Aikesheng Open Source Community is committed to providing stable, enterprise-grade open-source MySQL tools and services, releasing a premium open-source component each year on 1024 (Programmer's Day) and continuously operating and maintaining it.