Technical Architecture and Performance Optimization of the Intelligent Focus Auto‑Placement Platform

This article presents the design, implementation challenges, and optimization strategies of an intelligent auto‑placement platform, covering background analysis, distributed task scheduling with XXL‑Job, sharding with ShardingSphere, caching via Caffeine, message‑queue integration using Kafka, and the resulting performance gains.


The Intelligent Focus platform, built on the automotive news portal, aims to improve car‑related content delivery by matching content with audience segments through an automated advertising system.

Background: Leveraging the portal's massive daily traffic, the project seeks to create a high‑efficiency product that serves users interested in buying, viewing, and using cars.

Initial Solution: The early design outlined the overall workflow, including audience‑package creation, AI‑generated articles (AGC), scheduling, and multi‑channel ad placement.

Problems & Challenges: The end‑to‑end pipeline involved six systems and nine steps, leading to scheduling complexity, data‑storage concerns for a daily production of ~6.6 million (660万) cards, and long processing times (over 4 hours for a full day's workload).

Implementation Plan:

Adopted XXL‑Job, a lightweight distributed task‑scheduling framework, to orchestrate the workflow.
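XXL‑Job's sharding‑broadcast mode is what lets this orchestration scale out: every executor receives the same trigger plus a (shardIndex, shardTotal) pair and filters the workload down to its own slice. A minimal sketch of that filtering rule (the function name and the modulo rule are illustrative, not XXL‑Job's API):

```python
def items_for_shard(item_ids, shard_index, shard_total):
    """Return the subset of work items this executor instance should process.

    Mirrors the sharding-broadcast parameters (shardIndex / shardTotal):
    every executor sees the full task but keeps only its own slice,
    so shards are disjoint and together cover all items.
    """
    return [i for i in item_ids if i % shard_total == shard_index]

# With 3 executors, item ids 0..9 are split without overlap:
all_ids = list(range(10))
shards = [items_for_shard(all_ids, n, 3) for n in range(3)]
```

Because the slices are disjoint, adding executors divides the per‑machine workload without any coordination beyond the shard parameters.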

Introduced ShardingSphere for horizontal table sharding keyed on creation date, avoiding the performance bottleneck of a single oversized table.
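The date‑keyed routing rule can be sketched as a pure function from creation date to physical table name (the `card` table name and monthly granularity are assumptions for illustration; in ShardingSphere the equivalent rule is configured declaratively as a sharding algorithm rather than written by hand):

```python
from datetime import date

def route_table(logical_table: str, created: date) -> str:
    """Route a row to a monthly physical table by its creation date,
    e.g. card -> card_202406, so each physical table stays bounded
    in size and queries filtered by date touch only a few tables."""
    return f"{logical_table}_{created:%Y%m}"

route_table("card", date(2024, 6, 15))  # 'card_202406'
```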

Selected Caffeine as a local cache to replace frequent MySQL/Redis reads, reducing I/O latency.
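To see why an in‑process cache removes the network round‑trip, here is a toy expire‑after‑write cache in the spirit of Caffeine's `expireAfterWrite` policy (a stand‑in sketch only; real Caffeine also provides size‑based eviction, refresh, and hit/miss statistics):

```python
import time

class ExpireAfterWriteCache:
    """Tiny stand-in for a Caffeine-style local cache: an entry becomes
    invisible once its write is older than ttl_seconds, at which point
    the loader (e.g. a MySQL/Redis read) is consulted again."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable for testing
        self._store = {}

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key, loader=None):
        entry = self._store.get(key)
        if entry is not None:
            value, written = entry
            if self.clock() - written < self.ttl:
                return value        # fresh hit: no network I/O at all
            del self._store[key]    # expired: fall through to loader
        if loader is not None:
            value = loader(key)     # cache miss: load once, serve locally after
            self.put(key, value)
            return value
        return None
```

The point of the sketch is the hit path: a fresh entry is served from process memory, so the per‑request MySQL/Redis round‑trip disappears entirely between refreshes.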

Integrated Kafka as the asynchronous message queue for creative delivery, decoupling the DSP interface and improving throughput.
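The decoupling Kafka provides can be sketched with an in‑memory queue standing in for the topic: the producer publishes creatives and returns immediately, while a consumer hands them to the DSP at its own pace (the names and the `queue.Queue` stand‑in are illustrative, not the real Kafka client):

```python
import queue
import threading

def deliver_creatives(creatives, topic):
    """Producer side: publish each creative to the topic and return
    immediately, instead of blocking on a synchronous DSP call."""
    for c in creatives:
        topic.put(c)

def dsp_consumer(topic, delivered, stop):
    """Consumer side: drain the topic at its own pace; runs until the
    producer signals completion AND the backlog is empty."""
    while not (stop.is_set() and topic.empty()):
        try:
            delivered.append(topic.get(timeout=0.05))
        except queue.Empty:
            pass

topic = queue.Queue()          # stands in for the Kafka topic
delivered, stop = [], threading.Event()
worker = threading.Thread(target=dsp_consumer, args=(topic, delivered, stop))
worker.start()
deliver_creatives([f"creative-{i}" for i in range(100)], topic)  # returns at once
stop.set()
worker.join()
```

Because the producer never waits on the consumer, a slow DSP endpoint throttles only delivery, not creative production, which is exactly the throughput decoupling described above.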

Technical Comparisons: Evaluated Elastic‑Job vs. XXL‑Job (choosing XXL‑Job for its simplicity), ShardingSphere vs. Mycat (favoring ShardingSphere for horizontal sharding), and Kafka vs. RabbitMQ vs. RocketMQ (selecting Kafka for compatibility with the existing DSP stack).

Results: Production time for 6.6 million items dropped from 4 hours to 1.4 hours, a ~2.8× speedup (≈77,000 items/min). The platform now supports 100+ car‑model tasks and 140 ad‑slot specifications, with overall efficiency gains exceeding 20%.
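The headline numbers are easy to sanity‑check (the original "7.7 w/min" reads as 7.7 万, i.e. ≈77,000 items per minute):

```python
total_items = 6_600_000        # one full day's card production (660万)
minutes = 1.4 * 60             # 1.4 hours of wall-clock processing

per_minute = total_items / minutes   # throughput after optimization
speedup = 4 / 1.4                    # old 4 h run vs. new 1.4 h run
```

The computed throughput (≈78,600/min) and speedup (≈2.86×) are consistent with the rounded figures reported above.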

Conclusion: By combining distributed scheduling, sharding, local caching, and asynchronous messaging, the system meets its original goals of high‑efficiency, scalable auto‑placement for automotive content.

References: Official documentation for ShardingSphere, Mycat, Elastic‑Job, XXL‑Job, Caffeine, and related architecture guides.

Tags: backend architecture, distributed scheduling, sharding
Written by HomeTech (HomeTech tech sharing)