
Design and Implementation of a One‑Stop Fund Processing Platform

This article describes the design and implementation of a five‑layer one‑stop fund processing platform, covering architecture, rapid business support, hotspot data handling, storage optimization, and query improvements, including sharding, asynchronous processing, and dynamic data source strategies to ensure scalability and reliability.

Qunar Tech Salon

1. System Design and Implementation

The fund processing platform is built as a five‑layer system: business demand side, access layer, accounting service layer, basic service layer, and data storage layer.

The business demand side is divided into five categories: operations, reconciliation platform, payment transactions, financial platform, and non‑payment‑center systems. Non‑payment‑center systems connect via an access system, while the others use Dubbo or messaging.

The access layer handles system entry, protocol forwarding, and signature verification, applicable only to non‑payment‑center systems.

The accounting service layer provides core product services such as accounting, daily settlement, billing, clearing, reserve fund management, data center, data export, and monitoring.

The basic service layer offers foundational services for the accounting layer, including basic data query, internal and external query systems, dictionary service, configuration service, and Elasticsearch proxy.

The data storage layer stores massive data, employing multiple storage solutions that will be detailed later.

1.1.1. Architecture Design – Fund Processing Platform

The overall platform architecture follows the five‑layer design described above.

1.1.2. Reconciliation Platform Architecture

The reconciliation platform consists of four parts: data collection system, wide‑table system, reconciliation system, and discrepancy handling platform.

The data collection system gathers reconciliation files and stores the raw files in a unified HDFS repository, supporting both passive push and active pull (from local FTP, external FTP, bank front‑ends, and email).

The wide‑table system parses data and converts heterogeneous formats into a unified standard using an embedded JS parsing engine.
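The normalization step can be illustrated with a minimal sketch. The article describes an embedded JS parsing engine; the class below (`WideTableNormalizer`, a hypothetical name) stands in for it with a simple per‑channel field mapping, which is the essence of converting heterogeneous formats into one standard schema:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: each channel's raw reconciliation record is mapped
// into a standard schema via a per-channel field mapping. This stands in
// for the embedded JS parsing engine described in the article.
public class WideTableNormalizer {
    // channel -> (standard field name -> raw field name in that channel's file)
    private final Map<String, Map<String, String>> fieldMappings;

    public WideTableNormalizer(Map<String, Map<String, String>> fieldMappings) {
        this.fieldMappings = fieldMappings;
    }

    // Produce a standard-schema record from one channel-specific raw record.
    public Map<String, String> normalize(String channel, Map<String, String> raw) {
        Map<String, String> out = new HashMap<>();
        fieldMappings.get(channel).forEach((std, rawField) -> out.put(std, raw.get(rawField)));
        return out;
    }
}
```

A real engine would also handle type conversion and validation; the point here is only that the mapping lives in configuration, not code.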

The reconciliation system performs the actual matching and drives downstream processes such as billing, clearing, and reserve fund rollover.

The discrepancy handling platform addresses errors surfaced by reconciliation results, e.g., creating supplementary orders for missing records.

1.2. Rapid Business Support

Internet companies need to respond quickly to frequent product changes; the fund processing platform, being a low‑level system, must adapt to upstream variations.

Key challenges include handling multiple business scenarios, diverse fund sources, and varied account control rules.

Solutions introduced:

Parameterised, configurable clearing rules to support new business scenarios without code changes.

Configurable payment strategies to select different fund types per scenario.
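The two solutions above can be sketched together as a rule registry keyed by business scenario, so that a new scenario is a configuration entry rather than a code change. The names (`ClearingRuleRegistry`, `fundType`, `creditAccount`) are illustrative assumptions, not the platform's actual identifiers:

```java
import java.util.Map;

// Hypothetical sketch of parameterised clearing rules: each business
// scenario maps to a rule naming the fund type to use and the account to
// clear into. New scenarios only need a new configuration entry.
public class ClearingRuleRegistry {
    // A rule: which fund type to draw from and which account to credit.
    public record ClearingRule(String fundType, String creditAccount) {}

    private final Map<String, ClearingRule> rules;

    public ClearingRuleRegistry(Map<String, ClearingRule> rules) {
        this.rules = rules;
    }

    // Resolve the rule for a scenario, falling back to a default rule.
    public ClearingRule resolve(String scenario) {
        return rules.getOrDefault(scenario, new ClearingRule("CASH", "DEFAULT_CLEARING"));
    }
}
```

In production the map would be loaded from the configuration service mentioned in the basic service layer, and reloaded without a deploy.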

1.3. Hotspot Data Solution

High‑traffic accounts generate up to 3 million records per day, with occasional spikes (e.g., 10,000+ transactions per minute during red‑packet distribution).

The initial architecture combined the accounting and ledger systems, causing lock contention on hotspot accounts.

Solution: asynchronous redesign – split the monolith into separate accounting, ledger, and daily‑settlement services, allowing non‑real‑time accounts to be processed in batch.

For truly real‑time hotspot accounts, a sharding strategy hashes transactions on a large account across multiple sub‑accounts, spreading the balance evenly while presenting a single logical account externally.
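The sub‑account idea can be shown in a few lines. This is a minimal in‑memory sketch (not the platform's implementation): writes hash across N sub‑balances so no single row is a lock hotspot, and the external balance is the sum:

```java
// Hypothetical sketch of hotspot-account sharding: a hot account is split
// into N sub-accounts. Writes are hashed across them so concurrent
// transactions contend on different rows, while the external view sums the
// sub-balances into one logical balance.
public class ShardedAccount {
    private final long[] subBalances;

    public ShardedAccount(int shards) {
        this.subBalances = new long[shards];
    }

    // Route a credit to one sub-account, e.g. by transaction id.
    // In production this becomes a row update on a single sub-account record.
    public void credit(long txnId, long amount) {
        int idx = (int) Math.floorMod(txnId, (long) subBalances.length);
        subBalances[idx] += amount;
    }

    // The external view: a single logical account balance.
    public long balance() {
        long total = 0;
        for (long b : subBalances) total += b;
        return total;
    }
}
```

The trade‑off is that an exact point‑in‑time balance requires reading all sub‑accounts, which is acceptable for accounts whose bottleneck is write contention.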

1.4. Storage Optimization

After launch, the system faced massive data volume (30 million rows per day), long data retention (18 months), single‑instance storage limits (2.8 TB), and backup/recovery times exceeding 5 hours.

Solution 1 – Database Sharding: split the system into three databases (accounting, ledger, daily‑settlement) matching the three logical services.

Solution 2 – Table Partitioning: time‑based partitioning for large tables (e.g., monthly order tables, daily ledger tables) with each partition limited to ~100 million rows.
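Time‑based partitioning usually surfaces in code as table‑name routing. The sketch below (hypothetical names, assuming a `logical_table_yyyyMM` suffix convention) shows how a query for a given business date would be directed to the right monthly table:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch: route a logical table to a monthly physical table,
// e.g. "t_order" -> "t_order_202401", matching the time-based partitioning
// strategy described above.
public class MonthlyTableRouter {
    private static final DateTimeFormatter SUFFIX = DateTimeFormatter.ofPattern("yyyyMM");

    // Given a logical table and the business date of the row,
    // return the physical table the query should hit.
    public static String physicalTable(String logicalTable, LocalDate bizDate) {
        return logicalTable + "_" + bizDate.format(SUFFIX);
    }
}
```

Daily ledger tables would use the same pattern with a `yyyyMMdd` suffix; keeping each partition under ~100 million rows bounds both index depth and backup time.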

1.5. Query Optimization

After sharding, query performance issues emerged: schema changes had to be applied across every shard and table, existing indexes were used poorly, and some frequently queried fields had no index at all.

Solution – Deploy an Elasticsearch cluster to mask differences across shards and tables, providing fast search for previously un‑indexed fields.

Data synchronization uses Canal + Kafka, consuming binlog from an offline replica to avoid performance impact on the primary.
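The heart of that pipeline is mapping a binlog row change to an Elasticsearch document. The sketch below shows only that transformation; the Canal client and Kafka consumer are omitted, and all names (`RowChange`, `EsDoc`, the `fund_` index prefix) are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the sync step between the binlog stream and ES:
// a row change (table, primary key, column values) becomes an ES
// index/id/document triple. Using the table's primary key as the doc id
// makes replayed binlog events idempotent upserts.
public class BinlogToEsMapper {
    public record RowChange(String table, String pk, Map<String, String> columns) {}
    public record EsDoc(String index, String id, Map<String, String> source) {}

    // One ES index per logical table; doc id = primary key.
    public static EsDoc map(RowChange change) {
        Map<String, String> source = new HashMap<>(change.columns());
        return new EsDoc("fund_" + change.table(), change.pk(), source);
    }
}
```

Consuming from an offline replica's binlog, as the article notes, keeps this pipeline entirely off the primary's critical path.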

A unified query service abstracts the underlying storage, supporting dynamic data source switching via AOP without invasive changes.
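A common way to implement this (and a plausible reading of the AOP approach here, though the article does not show its code) is a thread‑local lookup key that an advice sets before the query method runs, in the spirit of Spring's `AbstractRoutingDataSource`. The class below is a framework‑free, hypothetical sketch:

```java
// Hypothetical sketch of dynamic data source switching: an AOP advice (not
// shown) sets the target source in a ThreadLocal before the query method
// executes, and the routing layer reads it when a connection is requested.
// This mirrors the Spring AbstractRoutingDataSource pattern.
public class DataSourceContext {
    // Default to the relational store; "es" is selected per-call.
    private static final ThreadLocal<String> CURRENT = ThreadLocal.withInitial(() -> "mysql");

    public static void use(String key) { CURRENT.set(key); }
    public static String current() { return CURRENT.get(); }
    public static void clear() { CURRENT.remove(); }
}
```

Because the switch lives in an advice around annotated query methods, callers keep their existing method signatures, which is what makes the change non‑invasive.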

ES is wrapped to appear as a traditional database, enabling seamless migration from MyBatis‑based SQL queries to ES queries.

2. Postscript

The above provides a concise overview of a one‑stop fund processing platform, touching on finance‑related concepts such as sub‑accounts, accounting entries, and reserve funds. The platform supports multi‑institution, multi‑currency fund flows, linking virtual funds with real‑bank reserves, though many detailed business rules (e.g., overdue interest, pre‑payment handling) are omitted.

Tags: backend architecture, scalability, sharding, database optimization, dynamic data source, asynchronous processing, fund processing
Written by

Qunar Tech Salon

Qunar Tech Salon is a learning and exchange platform for Qunar engineers and industry peers. We share cutting-edge technology trends and topics, providing a free platform for mid-to-senior technical professionals to exchange and learn.
