Design and Optimization of a Ride‑Hailing Platform: Unified Fleet Integration and Concurrent Price Estimation
This article explains the origin and system design of a ride‑hailing platform, compares direct and aggregation models, defines coverage and performance requirements, and details a unified fleet onboarding process together with a thread‑pool based concurrent price‑estimation solution that uses caching, priority grouping, and circuit‑breaker protection to achieve scalable, reliable service.
Background: In 2020 the COVID-19 pandemic reduced public transport usage, leading to increased demand for ride-hailing services. The "Home Travel Platform" was created to safely guide users to dealerships and to bridge online quoting with offline car viewing.
System construction: Two business models were evaluated – a direct S2B2C ride-hailing service and an aggregation S2B2C platform. The aggregation model was chosen for its lower cost, easier operation, and ability to leverage existing fleet providers while maintaining data security.
Platform requirements: coverage of 100+ major cities, a booking success rate of at least 60%, and fast price estimation.
Unified fleet integration: A standardized onboarding workflow was designed, using design patterns (template method, adapter, decorator, factory, singleton, strategy) to abstract common functionality and customize the differences between fleet providers and aggregation platforms.
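The combination of template method and adapter can be sketched as follows. This is a minimal illustration of the pattern pairing the article names; all class and method names here are hypothetical, not the platform's actual code.

```java
// Hypothetical sketch: the template method fixes the shared onboarding
// pipeline, while each fleet provider adapts its own payload format
// behind a single abstract step.
abstract class FleetOnboardingTemplate {

    // Template method: the fixed pipeline every provider goes through.
    public final String onboard(String rawProviderData) {
        String unified = adaptRequest(rawProviderData); // provider-specific adapter step
        return register(unified);                       // shared registration step
    }

    // Adapter hook: convert the provider's payload into the unified format.
    protected abstract String adaptRequest(String rawProviderData);

    // Common logic shared by all providers.
    private String register(String unified) {
        return "registered:" + unified;
    }
}

// One concrete provider adapter (hypothetical).
class SqFleetAdapter extends FleetOnboardingTemplate {
    @Override
    protected String adaptRequest(String rawProviderData) {
        // Real provider-specific field mapping would happen here.
        return rawProviderData.trim().toLowerCase();
    }
}
```

Because `onboard` is `final`, a new provider only supplies its adapter step; the shared registration logic cannot be bypassed or duplicated.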
Concurrent price estimation: To avoid performance degradation when multiple fleets are queried, a thread-pool based solution was introduced. The pool reuses threads (JDK 1.8 ThreadPoolExecutor) and applies caching, priority grouping, and fallback mechanisms via Alibaba Sentinel. The Javadoc of the JDK's internal ThreadPoolExecutor.Worker class, quoted below, describes how the pool manages the threads it reuses:
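A pool of this kind might be configured as follows; the sizes, queue capacity, and rejection policy below are illustrative assumptions, not the platform's actual settings.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical configuration of the price-estimation thread pool.
class EstimationPool {
    static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                8,                        // core threads kept alive for reuse
                16,                       // maximum threads under burst load
                60L, TimeUnit.SECONDS,    // idle time before extra threads exit
                new LinkedBlockingQueue<Runnable>(200),     // bounded task queue
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when saturated
    }
}
```

The bounded queue plus `CallerRunsPolicy` means that when the pool saturates, the submitting thread executes the task itself, which naturally throttles incoming estimation requests instead of dropping them.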
/**
* Class Worker mainly maintains interrupt control state for
* threads running tasks, along with other minor bookkeeping.
* This class opportunistically extends AbstractQueuedSynchronizer
* to simplify acquiring and releasing a lock surrounding each
* task execution. This protects against interrupts that are
* intended to wake up a worker thread waiting for a task from
* instead interrupting a task being run. We implement a simple
* non-reentrant mutual exclusion lock rather than use
* ReentrantLock because we do not want worker tasks to be able to
* reacquire the lock when they invoke pool control methods like
* setCorePoolSize. Additionally, to suppress interrupts until
* the thread actually starts running tasks, we initialize lock
* state to a negative value, and clear it upon start (in
* runWorker).
 */
Price estimation implementation (simplified):
/** Price-estimation pseudocode. */
public JSONArray estimatePrice(EstimateQuery estimateQuery) {
    /* .. */
    /** Result collector (declared here for completeness; note that concurrent
        adds would need a synchronized collection in production). */
    JSONArray array = new JSONArray();
    /** Query the available fleet providers. */
    List<Integer> channelIdList = channelMapper.getAllChannelIds();
    /** Build one asynchronous call task per fleet provider. */
    CompletableFuture[] cfs = channelIdList.stream()
            .map(i -> CompletableFuture
                    .supplyAsync(() -> request(i), taskExecutor)
                    .whenComplete((u, e) -> { if (null != u) { array.add(u); } }))
            .toArray(CompletableFuture[]::new);
    /** Run the tasks; block the calling thread until every task completes. */
    CompletableFuture.allOf(cfs).join();
    /** Post-processing .. */
}
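The article mentions caching as part of this solution but does not show it. A minimal sketch of what such a cache could look like follows; the key scheme, names, and the 30-second freshness window are illustrative assumptions, not the platform's design.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical estimate cache: a result is keyed by provider and route and
// reused within a short TTL, so repeated queries skip the remote HTTP call.
class EstimateCache {
    private static final long TTL_MILLIS = 30_000; // assumed freshness window

    private static final class Entry {
        final String value;
        final long storedAt;
        Entry(String value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    String getOrCompute(String key, Supplier<String> loader) {
        Entry e = cache.get(key);
        long now = System.currentTimeMillis();
        if (e != null && now - e.storedAt < TTL_MILLIS) {
            return e.value;                // fresh hit: skip the remote call
        }
        String v = loader.get();           // miss or stale: call the provider
        cache.put(key, new Entry(v, now));
        return v;
    }
}
```

A short TTL is the usual compromise for prices: long enough to absorb bursts of identical queries, short enough that quotes do not drift from the provider's live pricing.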
/**
 * Call the matching fleet provider's estimation endpoint to fetch price data.
 */
private JSONObject request(int channelId) {
    switch (channelId) {
        case Constant.SQ_CHANNEL_ID:
            /** Shouqi (首汽) estimate. */
            return sqService.getEstimatePrice(estimateQuery);
        case xx:
            /* Other providers; omitted .. */
    }
    return null;
}
Sentinel-protected interface:
/**
 * @SentinelResource adds a circuit breaker to the provider's estimation
 * endpoint: once the breaker trips (the share of timed-out calls exceeds the
 * configured threshold), the provider's HTTP interface is no longer called,
 * which protects overall performance.
 * CONNECT_TIMEOUT and REQUEST_TIMEOUT bound the HTTP request; a call that
 * exceeds them is abandoned so it cannot drag down the aggregate latency.
 */
@SentinelResource(value = "sqEstimate", blockHandler = "sqEstimateBlockHandler", fallback = "orderFallback")
public JSONObject getEstimatePrice(EstimateQuery estimateQuery) {
    /** Build the request payload. */
    Map<String, Object> requestMap = this.convertData(estimateQuery);
    /** Call the endpoint; CONNECT_TIMEOUT and REQUEST_TIMEOUT cap the request time.
        (requestMap is passed here so the payload built above is actually sent.) */
    String result = HttpUtils.post(this.REQUEST_URL, requestMap, CONNECT_TIMEOUT, REQUEST_TIMEOUT);
    /** Wrap and convert the raw result. */
    return this.convertResult(result);
}
Task data structure used for grouping fleet calls:
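The annotation above names a `blockHandler` and a `fallback` that the article does not show. Sentinel's contract requires the block handler to match the protected method's parameter list with a trailing `BlockException`, and the fallback to match it with a trailing `Throwable`; a hypothetical sketch of their shapes:

```java
/** Hypothetical handlers, not shown in the original article. */
public JSONObject sqEstimateBlockHandler(EstimateQuery estimateQuery, BlockException ex) {
    // Circuit open or flow rule hit: skip the HTTP call and degrade to empty.
    return new JSONObject();
}

public JSONObject orderFallback(EstimateQuery estimateQuery, Throwable t) {
    // Business exception (e.g. a timeout): degrade gracefully.
    return new JSONObject();
}
```

Returning an empty result here lets one provider's outage remove only its quotes from the aggregate response rather than failing the whole estimation.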
public class ChannelTaskNode {
    private boolean header;          // head node of its group
    private int priority;            // priority
    private int groupIndex;          // group index
    private Long channelId;          // fleet provider ID
    private ChannelTaskNode next;    // next node in the group

    /* constructor and getters/setters omitted */
}
Task initialization and grouping (simplified):
// Task groups for one estimation request.
ChannelTaskNode[] channelTaskArray = new ChannelTaskNode[maxThread];
// Fleet providers, sorted by priority.
int[] channelArray = queryAllOpenChannelSortedByPriority();
for (int i = 0; i < channelArray.length; i++) {
    int idx = i % maxThread;
    ChannelTaskNode node = channelTaskArray[idx];
    if (null != node) {
        // Append to the tail of this group's chain.
        while (null != node.next) {
            node = node.next;
        }
        node.next = new ChannelTaskNode(channelArray[i], null, priority, false);
    } else {
        // First node of the group becomes its head.
        channelTaskArray[idx] = new ChannelTaskNode(channelArray[i], null, priority, true);
    }
}
Composing and executing the chained tasks:
/** Chain the tasks within each group. */
CompletableFuture[] allTasks = new CompletableFuture[channelTaskArray.length];
for (ChannelTaskNode node : channelTaskArray) {
    if (null == node) {
        continue; // fewer channels than groups: this slot stays empty
    }
    ChannelTaskNode temp = node;
    CompletableFuture tempFuture = null;
    while (null != temp) {
        long channelId = temp.getChannelId();
        if (temp.isHeader()) {
            /** The group's head (its highest-priority channel) runs first. */
            tempFuture = CompletableFuture.supplyAsync(() -> service.estimatePrice(channelId));
        } else {
            /** Each lower-priority channel acts as a fallback: it is only
                called when the preceding step in the chain failed. */
            tempFuture = tempFuture.handle((obj, error) -> {
                if (null != error) {
                    return service.estimatePrice(channelId);
                }
                return obj;
            });
        }
        temp = temp.next;
    }
    allTasks[node.getGroupIndex()] = tempFuture;
}
/** Execute all groups, blocking until every chain completes. */
CompletableFuture.allOf(allTasks).join();
/** Process the results. */
// omitted
Performance results: Benchmarks (shown in the original figures) demonstrate that the multi-threaded, prioritized approach keeps response time within acceptable limits even as the number of queried fleets grows, confirming that the solution scales without becoming a bottleneck.
Conclusion: The article details the origin, architectural decisions, and optimization techniques of the Home Travel Platform, showing how a unified fleet onboarding process, thread-pool based concurrency, caching, priority grouping, and circuit-breaker protection enable a scalable, reliable price-estimation service for an aggregation-type ride-hailing system.
HomeTech tech sharing