
Gaode’s Adoption of OceanBase: Architecture, Practices, and Future Roadmap

Gaode migrated its core navigation, traffic, and financial services to OceanBase, leveraging Paxos-based multi-replica consistency, LSM-tree storage, and distributed transactions. It chose multi-point write for massive sync workloads and central-write for latency-critical queries, achieving sub-millisecond latency and significant storage savings, with a roadmap toward broader cost-effective, serverless deployment.

Amap Tech

Gaode (AutoNavi) is China’s leading mobile map, navigation and real‑time traffic provider. To support billions of daily active users and massive data growth, Gaode has migrated critical services to OceanBase, a native distributed NewSQL database developed by Ant Group.

Background

Gaode’s core mission – “making travel and life better” – requires a highly available, low‑latency data layer that can handle real‑time positioning, map updates, financial settlement, cloud synchronization and user‑generated content. Traditional single‑node databases cannot meet the scale, consistency and cost requirements.

Why OceanBase?

OceanBase offers Paxos‑based multi‑replica consistency, dynamic horizontal scaling, high compression (up to 70% reduction), support for both OLTP and OLAP workloads, MySQL‑compatible protocol, and built‑in cloud‑native features such as containerized deployment and tenant isolation.

Key Technical Features

• Paxos replication: majority-quorum writes guarantee strong consistency across regions.

• LSM-tree storage engine: write-optimized, hybrid row-column storage with a macro-block/micro-block hierarchy.

• Multiple replica types: full-function, log-only, and read-only replicas for flexible disaster recovery.

• Distributed transactions: two-phase commit with a global timestamp service (GTS) and the Percolator model for low-latency lock release.

• Vectorized, parallel execution engine for analytical queries.

• Multiple cache layers (Bloom filter, row, block, and metadata caches) to accelerate reads.

Use Cases

1. Financial Settlement (strong consistency) – migrated from XDB to OceanBase with three‑datacenter deployment, MySQL driver compatibility, and zero‑downtime cut‑over. Result: sub‑millisecond write latency, 30% storage saving, and robust cross‑region disaster recovery.

2. Cloud Sync (massive multi-point writes) – stores billions of device sync records. Adopted a three-site, six-direction OMS (OceanBase Migration Service) synchronization topology. Achieved 2-3 ms average latency under a write load of roughly 28,000 TPS and a read load of roughly 240,000 TPS.

3. Evaluation System (central-write, multi-read) – a read-heavy service with a 15 ms response-time (RT) requirement. Used a primary-backup cluster (primary in Zhangbei, backups in Shanghai and Shenzhen) with synchronous replication, achieving sub-2 ms reads and seamless failover.

Architecture Choices

Gaode evaluated three deployment patterns:

• Multi-point write – each region writes locally and synchronizes via OMS. Benefits: near-zero network latency for users and true active-active disaster recovery.

• Central-write, multi-read – a single primary with replicated read replicas. Benefits: simpler data consistency and lower write-coordination cost.

• Same-city multi-datacenter – primary and backup within one city, giving lower latency but limited cross-region resilience.

The decision criteria were read latency, cost, and disaster-recovery requirements. For latency-critical services (the evaluation system) Gaode chose central-write; for massive write workloads (cloud sync) it selected the multi-point model.

Best‑Practice Guidelines

• Partition key selection: use the most frequently queried column (e.g., appraiser_id) to avoid global index scans and reduce distributed transaction overhead.

PARTITION BY KEY(appraiser_id) PARTITIONS 512

• Index design: prefer local indexes that align with the partition key; limit global indexes to ≤4 and avoid them on high-write paths.

PRIMARY KEY (`appraise_id`, `appraiser_id`),
KEY `idx_appraiser_id_gmt_create` (`appraiser_id`, `gmt_create`) BLOCK_SIZE 16384 LOCAL,
KEY `idx_targetid_gmt_create` (`appraise_target_id`, `gmt_create`) BLOCK_SIZE 16384 GLOBAL
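Put together, the two fragments above correspond to a table definition along the following lines. This is a sketch only: the table name and the non-key columns are illustrative, while the key definitions and partitioning clause come from the guidelines.

```sql
-- Hypothetical evaluation table; non-key columns are illustrative.
CREATE TABLE appraise (
  `appraise_id` BIGINT NOT NULL,
  `appraiser_id` BIGINT NOT NULL,
  `appraise_target_id` BIGINT NOT NULL,
  `gmt_create` DATETIME NOT NULL,
  `content` VARCHAR(1024),
  PRIMARY KEY (`appraise_id`, `appraiser_id`),
  -- Local index: leading column matches the partition key, so lookups
  -- by appraiser stay inside a single partition.
  KEY `idx_appraiser_id_gmt_create` (`appraiser_id`, `gmt_create`) BLOCK_SIZE 16384 LOCAL,
  -- Global index: serves lookups by target id, at the cost of possible
  -- cross-partition reads and extra distributed-transaction overhead.
  KEY `idx_targetid_gmt_create` (`appraise_target_id`, `gmt_create`) BLOCK_SIZE 16384 GLOBAL
) PARTITION BY KEY(appraiser_id) PARTITIONS 512;
```

Because `appraiser_id` is both the partition key and the leading column of the local index, the common query path touches exactly one partition; the global index exists only for the secondary access path by `appraise_target_id`.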

• Replication tables for small, frequently joined dimension tables (e.g., city or category tables) to eliminate cross‑partition joins.
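OceanBase supports this through duplicate (replicated) tables, which keep a full copy on every server so joins against them never cross partitions. A hedged sketch of the DDL follows; the `city` table and its columns are illustrative, and `DUPLICATE_SCOPE` is the OceanBase table option that enables cluster-wide replication:

```sql
-- Hypothetical small dimension table replicated to every node.
CREATE TABLE city (
  `city_id` INT NOT NULL PRIMARY KEY,
  `city_name` VARCHAR(64) NOT NULL
) DUPLICATE_SCOPE = 'cluster';
```

Joins from a large partitioned fact table to `city` can then be executed locally on whichever node holds the fact-table partition, instead of shipping rows across the cluster.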

• Leader replica distribution: keep the default placement unless heavy global-index reads or distributed transactions cause bottlenecks.

• Serverless and cloud-native deployment: OceanBase can run in containerized environments, supporting auto-scaling and pay-as-you-go resource models.

Operational Challenges & Solutions

• Java client timeouts caused by synchronized connection handling – resolved by upgrading the client library.

• Proxy connection limits – mitigated by horizontally scaling OceanBase-Proxy instances.

• OMS lag risking data loss at cut-over – addressed by splitting OMS streams per table to enable sub-second replication, and by pausing OMS during migration windows.

• CPU spikes from suboptimal query plans – fixed by redesigning indexes to include the partition key and by upgrading to newer OceanBase versions with improved DESC sorting.

Future Roadmap

Gaode plans to extend OceanBase usage to:

• Massive structured data (e.g., user footprints) – targeting more than 50% cost reduction via compression and elastic scaling.

• Massive unstructured data with schemaless columns, multi-version support, and column-level TTL – aiming for 30-40% cost savings.

• Replacing the existing AnalyticDB (ADB) with OceanBase's upcoming pure column-store engine for AP workloads, with an expected performance boost of 3× or more.

• Exploring OceanBase's serverless offering to further lower operational overhead and enable instant scaling.

Overall, Gaode’s experience demonstrates that a well‑designed OceanBase deployment can simultaneously achieve strong consistency, high throughput, low latency, and significant cost savings across a variety of real‑world workloads.

Tags: cloud native, scalability, distributed database, data compression, OceanBase
Written by

Amap Tech

Official Amap technology account showcasing all of Amap's technical innovations.
