Performance Optimization and Tuning of Apache Doris Vectorized Version for Xiaomi's A/B Experiment Platform
Xiaomi upgraded Apache Doris from version 0.13 to the vectorized 1.1.2 release for its A/B experiment platform. Extensive single‑SQL and concurrent tests surfaced CPU, memory, and fragment‑timeout issues, which the team resolved through tuning (memory‑decommit settings, faster string matching, and community patches), achieving up to 5× query speedups and improved stability.
Since September 2019, Xiaomi has widely deployed Apache Doris for near‑real‑time, multi‑dimensional analytics across dozens of internal services. As business growth increased query performance demands, the community‑released 1.1 version with full vectorization became essential for the A/B experiment platform.
The team built a test cluster matching the production 0.13 configuration (3 FE + 89 BE, each BE with dual Intel Xeon Silver 4216 CPUs, 256 GB RAM, 7.3 TB HDD) to compare Doris 1.1.2 against the live 0.13 cluster.
Single‑SQL Serial Query Test
Seven typical A/B experiment queries were run over 1‑day, 7‑day, and 20‑day partitions (~31 billion rows, ~2 TB). Results showed the vectorized version delivering 3–5× faster query times.
Concurrent Query Test
When the same workload was submitted concurrently, Doris 1.1.2 exhibited higher latency, CPU usage capped at ~50% (versus ~100% on 0.13), frequent RPC timeouts, and numerous query errors.
Key issues identified:
CPU usage limited due to lock contention in TCMalloc’s page‑heap allocation.
Frequent fragment‑send RPC timeouts.
Slow LIKE queries using std::search().
Excessive memory copies during string column processing.
Optimization Practices
Increase CPU Utilization: Disabled aggressive_memory_decommit (set to false) to keep freed memory cached in TCMalloc's PageHeap rather than returning it to the OS, raising CPU usage to near 100%.
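As a sketch, this kind of toggle lives in the BE configuration file (be.conf). The key below follows the article's wording; the exact flag name and default may differ across Doris versions, so treat this as illustrative rather than a copy‑paste setting:

```ini
# be.conf (hypothetical sketch; key name follows the article's wording)
# Keep freed memory cached in TCMalloc's PageHeap instead of aggressively
# returning it to the OS, trading higher resident memory for less
# page-heap lock contention under concurrent queries.
aggressive_memory_decommit = false
```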
Fragment Timeout Fix: Applied community PRs (e.g., #12427) adding a timeout wake‑up for sleeping fragment threads, preventing thread‑pool exhaustion.
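The idea behind that fix (waking a parked fragment worker once its deadline passes, rather than letting it block a thread‑pool slot indefinitely) can be sketched with a timed condition‑variable wait. This is a hypothetical simplification, not the code from the actual PR:

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical sketch: a fragment worker waits for new work with a timeout,
// so an idle thread wakes up and can be reclaimed by the pool instead of
// sleeping forever on a queue that may never be signaled.
class FragmentWaiter {
public:
    // Returns true if work arrived before the deadline, false on timeout.
    bool wait_for_work(std::chrono::milliseconds timeout) {
        std::unique_lock<std::mutex> lock(mu_);
        return cv_.wait_for(lock, timeout, [this] { return has_work_; });
    }

    void submit_work() {
        {
            std::lock_guard<std::mutex> lock(mu_);
            has_work_ = true;
        }
        cv_.notify_one();
    }

private:
    std::mutex mu_;
    std::condition_variable cv_;
    bool has_work_ = false;
};
```

A caller that gets `false` back can return the thread to the pool and report the fragment as timed out, which is the behavior the patch restores.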
LIKE Performance: Replaced the default std::search() with std::strstr(), achieving roughly a 2× speedup in string pattern matching.
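To illustrate the substitution, both helpers below find the same match for a LIKE '%needle%' predicate; std::strstr from the C library is typically heavily optimized in common libc implementations, while std::search without an explicit searcher is a straightforward element‑by‑element algorithm. The helper names are hypothetical, not the Doris predicate code:

```cpp
#include <algorithm>
#include <cstring>
#include <string>

// Hypothetical helpers contrasting the two substring searches behind a
// LIKE '%needle%' predicate. Both return the match offset, or npos.
inline std::string::size_type find_generic(const std::string& hay,
                                           const std::string& needle) {
    // Generic algorithm: compares element by element.
    auto it = std::search(hay.begin(), hay.end(), needle.begin(), needle.end());
    return it == hay.end()
               ? std::string::npos
               : static_cast<std::string::size_type>(it - hay.begin());
}

inline std::string::size_type find_strstr(const std::string& hay,
                                          const std::string& needle) {
    // C library routine: usually hand-optimized in libc.
    const char* p = std::strstr(hay.c_str(), needle.c_str());
    return p == nullptr
               ? std::string::npos
               : static_cast<std::string::size_type>(p - hay.c_str());
}
```

One caveat of the swap: std::strstr requires null‑terminated input, so it only applies where the column values are materialized as C strings.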
Memory Copy Reduction: Pre‑allocated PODArray space based on the first rows read, avoiding repeated resize calls. The estimation formula was `required_total_size = (current_total_size / m) * n`, where m is the sampled row count and n the total row count.
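A minimal sketch of that extrapolation, using a plain byte buffer to stand in for PODArray (the helper names are hypothetical): after materializing the first m rows, the bytes consumed so far are scaled up to all n rows and reserved in a single call, instead of letting the buffer grow and re‑copy repeatedly as rows are appended:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the pre-allocation estimate: extrapolate the final
// buffer size from the first m rows, then reserve once.
inline std::size_t estimate_total_size(std::size_t current_total_size,
                                       std::size_t m, std::size_t n) {
    if (m == 0) return 0;
    // required_total_size = (current_total_size / m) * n
    return current_total_size / m * n;
}

// Example use with std::vector<char> standing in for PODArray.
inline void reserve_for_remaining(std::vector<char>& buf,
                                  std::size_t m, std::size_t n) {
    buf.reserve(estimate_total_size(buf.size(), m, n));
}
```

Since the estimate is an average over the sampled rows, a skewed prefix can under‑ or over‑reserve; the buffer still grows correctly in that case, just with a residual copy.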
Additional patches reduced the Thrift‑serialized size of execution plans and introduced pooled RPC stubs.
Post‑Tuning Test Results
After applying the above changes, the vectorized Doris 1.1.2 cluster achieved:
~48–52% lower average latency and ~49–53% lower P95 latency in Tests 1 and 2.
4–6× overall query speed improvement in a user‑behavior analysis workload (Test 3).
CPU utilization consistently near 100 % and stable query execution without the previous RPC timeouts.
These results confirmed that the tuned vectorized version meets Xiaomi’s production requirements for the A/B experiment platform.
In conclusion, the collaboration between Xiaomi, SelectDB, and the Apache Doris community successfully delivered a high‑performance, stable vectorized database solution, with lessons applicable to other large‑scale analytical workloads.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.