Performance Optimization of Multi‑Modal Transfer Route Stitching in Ctrip Backend

This article analyzes the challenges of stitching multi‑modal transport routes in Ctrip's backend, identifies performance bottlenecks through monitoring, profiling and benchmarking, and presents a series of optimizations—including code refactoring, indexing, multi‑way merge, multi‑level caching, preprocessing, multithreading, lazy computation, and JVM tuning—that collectively reduce latency and resource consumption.

Ctrip Technology

Due to limited transport resources and high demand during peak periods, Ctrip often needs to generate multi‑modal transfer routes (train, plane, car, ship) by stitching two or more legs together, which creates an enormous combination space and requires real‑time data, leading to heavy CPU and I/O costs.

The optimization follows three core principles: treat performance optimization as a means rather than an end, avoid premature or incorrect optimizations, and base every change on quantitative analysis using monitoring, profilers, and benchmarks.

Performance analysis added fine‑grained latency markers and used async-profiler. The resulting flame graph showed the combineTransferLines function (53.80% of CPU) and the data‑query path (querySegmentCacheable, 21.45%) dominating execution time, with the computeTripScore step (48.22%) heavily impacted by a custom StringUtils.format implementation built on String.replace, which is slow because each call compiles a literal regex Pattern.

// Example of the original format function. On JDK 8, String.replace compiles
// a literal Pattern and allocates a new String on every loop iteration.
public static String format(String template, Object... parameters) {
    for (int i = 0; i < parameters.length; i++) {
        template = template.replace("{" + i + "}", parameters[i] + "");
    }
    return template;
}

Refactoring replaces this with Apache Commons StringUtils.replace or, better still, avoids templating altogether with StringUtils.joinWith for delimiter‑separated concatenation, dramatically reducing execution time.

// Optimized version using Apache replace
public static String format(String template, Object... parameters) {
    for (int i = 0; i < parameters.length; i++) {
        String temp = new StringBuilder().append('{').append(i).append('}').toString();
        template = org.apache.commons.lang3.StringUtils.replace(template, temp, String.valueOf(parameters[i]));
    }
    return template;
}

// Preferred usage
String result = StringUtils.joinWith("_", aaaa, bbbb, cccc, dddd, eeee, ffff);

To cut the combinatorial explosion of time‑window checks, a red‑black tree index on departure timestamps (similar to a MySQL B+ tree) reduces the number of comparisons from tens of thousands to a few thousand per transfer city.
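The same idea can be sketched with Java's TreeMap, which is itself a red‑black tree: index a city's outbound legs by departure timestamp, then fetch only the legs whose departure falls inside the allowed transfer window via a subMap range query instead of scanning every leg. All names here are illustrative, not Ctrip's actual code.

```java
import java.util.*;

// Sketch: red-black-tree index on departure timestamps for one transfer city.
// subMap touches only the matching key range, so a window lookup costs
// O(log n + matches) instead of O(n).
public class DepartureIndex {
    private final TreeMap<Long, List<String>> byDeparture = new TreeMap<>();

    public void add(long departureEpochMilli, String legId) {
        byDeparture.computeIfAbsent(departureEpochMilli, k -> new ArrayList<>()).add(legId);
    }

    /** Legs departing within [arrival + minGap, arrival + maxGap], inclusive. */
    public List<String> connectable(long arrivalEpochMilli, long minGapMillis, long maxGapMillis) {
        List<String> result = new ArrayList<>();
        byDeparture.subMap(arrivalEpochMilli + minGapMillis, true,
                           arrivalEpochMilli + maxGapMillis, true)
                   .values().forEach(result::addAll);
        return result;
    }
}
```

With, say, a 30‑minute to 4‑hour transfer window, only the legs in that key range are ever compared, which is where the drop from tens of thousands of comparisons to a few thousand comes from.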

When selecting the top‑K candidate routes from many sorted queues, a multi‑way merge using a max‑heap achieves O(n log k) complexity instead of O(n log n) sorting.
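A minimal sketch of that merge, assuming the queues are sorted ascending by route score (so a min‑heap of queue heads yields the next‑best route; the max‑heap variant in the text is the symmetric case for descending order):

```java
import java.util.*;

// Sketch: take the K lowest-score routes from m already-sorted queues without
// sorting the union. The heap holds at most one head per queue, so each of the
// K emissions costs O(log m) -- far cheaper than an O(n log n) full sort.
public class TopKMerge {
    public static List<Integer> topK(List<Deque<Integer>> sortedQueues, int k) {
        // Heap entries: {score, queueIndex}, ordered by score ascending.
        PriorityQueue<int[]> heap = new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
        for (int i = 0; i < sortedQueues.size(); i++) {
            Deque<Integer> q = sortedQueues.get(i);
            if (!q.isEmpty()) heap.add(new int[]{q.pollFirst(), i});
        }
        List<Integer> result = new ArrayList<>();
        while (result.size() < k && !heap.isEmpty()) {
            int[] best = heap.poll();
            result.add(best[0]);
            Deque<Integer> q = sortedQueues.get(best[1]); // refill from the same queue
            if (!q.isEmpty()) heap.add(new int[]{q.pollFirst(), best[1]});
        }
        return result;
    }
}
```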

A multi‑level cache architecture is introduced: static data (stations, regions) reside in an in‑memory HashMap, frequently accessed schedule data are cached in Redis, hot‑spot data are kept in an LFU in‑memory layer, and large assembled route results are persisted in RocksDB for reuse within a configurable TTL.

Additional engineering measures include offline preprocessing to prune low‑quality transfer cities, parallel stitching with a ForkJoinPool work‑stealing strategy, lazy construction of full route objects after filtering, and JVM tuning (G1 GC, increasing G1 region size to 16 MiB) to avoid premature promotion of large objects.
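The parallel stitching step can be sketched with a dedicated ForkJoinPool, so that work‑stealing balances unevenly sized transfer cities without saturating the JVM's common pool; the city list and the per‑city stitching logic below are placeholders, not the production code.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

// Sketch: run per-city stitching on a dedicated ForkJoinPool. Submitting the
// parallel stream from inside the pool makes the stream use that pool's
// work-stealing workers instead of ForkJoinPool.commonPool().
public class ParallelStitcher {
    public static List<String> stitchAll(List<String> transferCities) {
        ForkJoinPool pool = new ForkJoinPool(4); // size per deployment, isolated from commonPool()
        try {
            return pool.submit(() ->
                transferCities.parallelStream()
                              .map(city -> "routes-via-" + city) // placeholder for real leg stitching
                              .collect(Collectors.toList())      // ordered collect preserves city order
            ).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```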

After applying these optimizations, the end‑to‑end stitching latency drops dramatically (see Figure 9), and the overall CPU profile shifts away from string formatting and data merging, confirming the effectiveness of the systematic performance‑first approach.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
