Large-Scale Offline‑Online Mixed Deployment at Huya: Architecture, Challenges, and Solutions
This article describes Huya's large‑scale offline‑online mixed deployment, detailing the low resource‑utilization problems, the time‑sharing and elastic scheduling solutions, the containerized architecture, multi‑datacenter isolation, heterogeneous resource handling, stability safeguards, and the resulting performance improvements and future directions.
Huya's data platform faces two major problems: low resource utilization (online services average 17% CPU, offline clusters 55%) and slow resource delivery: capacity cannot be provisioned quickly during peak online events or for urgent offline queries.
To address these, Huya adopts a time‑sharing strategy that flattens resource usage across the day, and builds an elastic scheduling layer that dynamically reallocates resources between online and offline workloads while separating storage and compute.
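The time‑sharing idea can be sketched as a simple policy that lends more cluster CPU to offline jobs when online traffic is low. The thresholds and hours below are hypothetical, since the article does not publish Huya's actual schedule:

```python
# Minimal sketch of a time-sharing allocation policy. Live-streaming
# traffic typically peaks in the evening, so offline jobs get a large
# share of cluster CPU overnight and a small share at peak.
# All ratios and hour windows here are illustrative assumptions.

def offline_cpu_share(hour: int, online_peak_hours=range(19, 24)) -> float:
    """Return the fraction of cluster CPU lent to offline jobs at `hour`."""
    if hour in online_peak_hours:
        return 0.2   # keep headroom for online peaks
    if 1 <= hour < 7:
        return 0.8   # overnight batch/ETL window
    return 0.5       # daytime baseline
```

In practice the schedule would be driven by forecasted online demand rather than fixed hours, but the flattening effect on the daily utilization curve is the same.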
The containerization solution ensures that any unused resources on the host are offered to offline jobs, avoiding the fragmentation seen in declarative pod requests. A dynamic resource layer monitors metrics from the Neo monitoring system and adjusts YARN allocations in real time.
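That dynamic resource layer amounts to a control loop: read host utilization, subtract a safety margin, and resize the YARN NodeManager's advertised capacity to whatever online services leave unused. A minimal sketch, assuming a metrics reader standing in for Neo and an illustrative setter (neither is a real public API):

```python
# Sketch of a dynamic resource control loop. `read_online_util` and
# `set_nm_vcores` are hypothetical callables standing in for the Neo
# metrics source and a YARN NodeManager capacity update.

def unused_capacity(host_total_vcores: int, online_cpu_util: float,
                    safety_margin: float = 0.1) -> int:
    """vcores that can be offered to YARN, keeping a safety margin
    above current online usage."""
    reserved = host_total_vcores * (online_cpu_util + safety_margin)
    return max(0, int(host_total_vcores - reserved))

def control_loop_step(host_total_vcores, read_online_util, set_nm_vcores):
    """One iteration: read metrics, push the new NodeManager capacity."""
    util = read_online_util()
    set_nm_vcores(unused_capacity(host_total_vcores, util))
```

Because the loop reacts to measured usage rather than declared pod requests, idle-but-reserved capacity is also reclaimed, which is what avoids the fragmentation mentioned above.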
Network isolation is achieved with QoS throttling between data centers and a dedicated VPC gateway for cloud‑based offline workloads, preventing interference with online traffic.
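Cross-datacenter QoS throttling of this kind is commonly implemented with Linux traffic control (tc) using HTB classes. The sketch below builds such commands; the device name, rates, and destination CIDR are placeholders, not Huya's actual values:

```python
# Illustrative cross-DC throttle: cap offline traffic bound for a
# remote data center while leaving online traffic unthrottled.
# All parameters are hypothetical defaults.

def cross_dc_throttle_cmds(dev="eth0", offline_rate="2gbit",
                           offline_ceil="4gbit",
                           remote_cidr="10.8.0.0/16"):
    return [
        f"tc qdisc add dev {dev} root handle 1: htb default 10",
        # class 1:10 -- default (online) traffic, effectively uncapped
        f"tc class add dev {dev} parent 1: classid 1:10 htb rate 10gbit",
        # class 1:20 -- offline cross-DC traffic, rate-limited with a burst ceiling
        f"tc class add dev {dev} parent 1: classid 1:20 htb rate {offline_rate} ceil {offline_ceil}",
        # steer traffic destined for the remote DC into the throttled class
        f"tc filter add dev {dev} parent 1: protocol ip prio 1 u32 "
        f"match ip dst {remote_cidr} flowid 1:20",
    ]
```

The `ceil` parameter lets offline traffic borrow idle bandwidth up to a cap, which matches the goal of using spare capacity without interfering with online traffic.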
Additional techniques include disk reuse on legacy machines, bandwidth sharing between HDFS and YARN, and a hybrid scheduling policy that oversells CPU/memory while respecting label constraints.
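An overselling policy of the kind described can be sketched as a per-node calculation: advertise more CPU and memory than physically present, but only on nodes whose labels permit mixed deployment. The label name and ratios below are illustrative assumptions:

```python
# Hypothetical oversell policy. Nodes without the (assumed)
# "mixed-deploy" label keep their physical capacity; labeled nodes
# advertise inflated capacity, since online pods rarely use their
# full request. Ratios are illustrative.

def advertised_resources(phys_vcores: int, phys_mem_gb: int, labels,
                         cpu_ratio: float = 1.5, mem_ratio: float = 1.2):
    if "mixed-deploy" not in labels:
        return phys_vcores, phys_mem_gb   # label constraint: no oversell
    return int(phys_vcores * cpu_ratio), int(phys_mem_gb * mem_ratio)
```

Memory is typically oversold less aggressively than CPU because reclaiming it under pressure is far more disruptive, which is why the stability safeguards below matter.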
Stability is protected by binding HDFS to specific CPUs, implementing QoS for bandwidth, and using a scoring system to kill the least critical pod processes during overload, avoiding OOM kills.
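The overload eviction can be sketched as a two-step procedure: rank offline processes by criticality and resource cost, then kill the lowest-value victims until the host is back under budget. The scoring weights and field names here are hypothetical:

```python
# Sketch of score-based eviction under overload. Each process is a
# dict with 'pid', 'priority' (higher = more critical), and current
# 'cpu'/'mem' usage. Field names and the scoring rule are assumptions.

def eviction_order(procs):
    """Least critical processes first; among equals, the most
    resource-hungry first, so fewer kills reclaim more capacity."""
    return sorted(procs, key=lambda p: (p["priority"], -(p["cpu"] + p["mem"])))

def select_victims(procs, excess_cpu: float):
    """Pick pids to kill until at least `excess_cpu` cores are reclaimed."""
    victims, reclaimed = [], 0.0
    for p in eviction_order(procs):
        if reclaimed >= excess_cpu:
            break
        victims.append(p["pid"])
        reclaimed += p["cpu"]
    return victims
```

Killing chosen victims proactively, instead of letting the kernel OOM killer fire, keeps the selection deterministic and spares critical online processes.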
After deployment, average CPU utilization rose from 17% to 51%, delivering a 200% increase, with 65% of resources now coming from the mixed deployment and achieving over 40% cost savings for both online and offline workloads.
Future work aims to smooth utilization fluctuations, extend mixed deployment to edge data centers, and further improve resource stability.