Optimizing Bilibili Presto Cluster Query Performance with Alluxio and Local Cache
This article presents a comprehensive technical overview of Bilibili's Presto cluster architecture, the challenges of query performance on Hadoop, and the systematic optimizations—including Alluxio integration, local cache mechanisms, multi‑active coordinators, label‑based scheduling, and real‑time penalties—that together improve availability, stability, and latency for large‑scale analytics workloads.
Introduction – The session introduces the theme of optimizing Bilibili's Presto cluster query performance, briefly covering Presto fundamentals and the internal architecture of Bilibili's Presto deployment.
Cluster Architecture – Bilibili's Presto clusters span two IDC sites, with a Dispatcher routing queries based on data size and engine load. A custom Coral service translates Hive/Spark SQL to Presto syntax, and Ranger provides unified permission control across HDFS, Hive, Presto, and Spark.
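The routing rule above can be sketched in a few lines. This is a hypothetical illustration of the Dispatcher's decision, not Bilibili's actual implementation: small scans go to the latency‑oriented Presto clusters when they are not overloaded, and everything else falls back to Spark. The threshold and load cap are assumed values.

```python
SMALL_SCAN_BYTES = 10 * 1024**3  # assumed cutoff: 10 GB of scanned data

def route_query(scan_bytes: int, presto_load: float, load_cap: float = 0.8) -> str:
    """Pick an engine from estimated scan size and current Presto load.

    Small scans on a healthy Presto cluster stay on Presto; large scans
    or an overloaded cluster push the query to Spark.
    """
    if scan_bytes <= SMALL_SCAN_BYTES and presto_load < load_cap:
        return "presto"
    return "spark"
```

In practice the Dispatcher would derive `scan_bytes` from table statistics and `presto_load` from cluster metrics; both inputs are placeholders here.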
Presto Overview – Presto, originally open‑sourced by Facebook in 2013, is a distributed ANSI‑SQL engine designed for OLAP. It follows a coordinator‑worker model where the coordinator parses SQL, generates logical plans, splits them into stages and tasks, and distributes tasks to workers via a scheduler.
Key Optimizations
Availability: A multi‑active coordinator redesign eliminates the coordinator as a single point of failure.
Stability: Label‑based resource isolation and real‑time penalty mechanisms prevent large queries from starving latency‑critical workloads.
Performance: Query limits in Presto‑Gateway block abusive or overly large HDFS scans.
Alluxio Integration – To mitigate the network overhead and RPC latency introduced by compute‑storage separation, Alluxio is introduced as a caching layer. Hot‑data tags are generated from query lineage, and an Alluxio worker serves cached data before falling back to HDFS. The Hive connector is modified to recognize Alluxio tags and rewrite path schemes accordingly.
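The scheme rewrite can be sketched as follows. This is a minimal illustration under assumptions: the tag name (`alluxio.hot`) and the Alluxio master address are invented for the example, and the real connector works on Presto's split paths rather than plain strings.

```python
ALLUXIO_AUTHORITY = "alluxio-master:19998"  # illustrative address

def resolve_path(hdfs_path: str, partition_tags: set) -> str:
    """Rewrite an hdfs:// path to alluxio:// when the partition is tagged hot.

    Untagged partitions keep their original path and are read from HDFS.
    """
    if "alluxio.hot" in partition_tags:
        # swap scheme and authority, keep the file path after the namespace
        suffix = hdfs_path.split("://", 1)[1].split("/", 1)[1]
        return f"alluxio://{ALLUXIO_AUTHORITY}/{suffix}"
    return hdfs_path
```

Because the rewrite happens inside the connector, queries need no changes: a cache miss in Alluxio still falls through to the underlying HDFS file.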
Local Cache Enhancements – A lightweight Alluxio instance is embedded directly into Presto workers (local mode). Improvements include:
Hive meta‑cache with versioning.
File‑list, fragment‑result, and ORC/Parquet footer caches.
Soft‑affinity scheduling (hash‑mod and consistent hashing with virtual nodes) to improve cache locality.
Support for multiple disks using available‑space‑based probabilistic placement.
Robust startup and failure handling for local cache restoration.
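The consistent‑hashing variant of soft‑affinity scheduling can be sketched compactly. Each worker is hashed onto a ring many times (virtual nodes), and a split's preferred worker is the first ring entry clockwise from the split's own hash; adding or removing a worker then remaps only a small share of splits, which is what protects cache locality compared to plain hash‑mod. The class and parameter names below are illustrative, not Presto's internal API.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    """Stable hash onto the ring (md5 keeps results consistent across runs)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, workers, virtual_nodes: int = 100):
        # each worker appears `virtual_nodes` times on the ring
        self._ring = sorted(
            (_hash(f"{w}#{i}"), w)
            for w in workers
            for i in range(virtual_nodes)
        )
        self._keys = [h for h, _ in self._ring]

    def preferred_worker(self, split_path: str) -> str:
        """First worker clockwise from the split's position on the ring."""
        idx = bisect.bisect(self._keys, _hash(split_path)) % len(self._keys)
        return self._ring[idx][1]
```

The affinity is "soft" in the sense that the scheduler treats this as a preference: if the preferred worker is overloaded, the split can still run elsewhere at the cost of a cache miss.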
Results – After deployment, ~30% of BI workloads use Alluxio, caching roughly 200,000 partitions (~45 TB). Query latency improves by ~20% compared to pure HDFS, and cache hit rates reach ~40% across three Presto clusters.
Future Work – Plans include expanding local‑cache mode to more clusters, adding TextFile format support, implementing disk health detection, refining soft‑affinity to balance load, and excluding nodes without cache from scheduling.
Q&A Highlights
Cross‑IDC query routing is achieved via Presto‑Gateway analysis of table locations and data volume.
Local cache does not require a separate Alluxio cluster; it runs as a jar inside Presto.
Cache hit rate is measured by counting queries that hit Alluxio versus total queries.
High‑concurrency performance issues stem from SSD I/O limits.
Local cache memory footprint is low because it caches at page granularity.
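The hit‑rate accounting mentioned in the Q&A reduces to a simple counter: a query counts as a hit if it read any bytes from Alluxio. The sketch below illustrates that bookkeeping; the class and field names are assumptions, not Bilibili's metrics code.

```python
class CacheHitStats:
    """Track the share of queries that read at least one byte from Alluxio."""

    def __init__(self):
        self.total = 0
        self.hits = 0

    def record(self, bytes_from_alluxio: int) -> None:
        self.total += 1
        if bytes_from_alluxio > 0:
            self.hits += 1

    def hit_rate(self) -> float:
        return self.hits / self.total if self.total else 0.0
```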
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.