Elastic Distributed Training at Huya: Design, Implementation, and Results
This article describes Huya's elastic distributed training system, explaining why elasticity is needed, the architectural design using Kubernetes and ETCD, the dynamic scaling process, performance evaluations on ResNet‑50, and future improvements for more efficient and reliable AI model training.
Huya's AI platform evolved from a chaotic state before 2019 to a unified, cloud‑native Kubernetes‑based system that standardizes development, training, and inference workflows, improving resource utilization and reducing queue times.
Elastic distributed training was introduced to address three main challenges: (1) pronounced GPU usage peaks and idle resources during low‑traffic periods, (2) fragmented GPU resources across machines that prevent efficient task placement, and (3) training interruptions caused by machine failures.
The elastic design relies on ETCD for node registration, leader election, and its watch mechanism. Each node registers its IP, port, and GPU information with ETCD, retrieves the list of peer nodes, and is assigned a rank. A standard Ring AllReduce communication ring is then established for distributed training.
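The registration-and-rank flow can be sketched with an in-memory stand-in for ETCD. This is a minimal illustration of the logic only: a real deployment would use an ETCD client with leases and watches, and all class and method names here are hypothetical, not Huya's actual API.

```python
import threading

class Rendezvous:
    """In-memory stand-in for the ETCD-backed rendezvous described above.

    Each node registers its address and GPU count; ranks are assigned by
    sorted registration order, and the lowest-address node acts as leader
    (mirroring ETCD leader election). Names are illustrative.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._nodes = {}  # addr -> gpu_count

    def register(self, addr, gpu_count):
        """Register a node; return its rank and the current world size."""
        with self._lock:
            self._nodes[addr] = gpu_count
            # Deterministic rank assignment: sort registered addresses.
            ranks = {a: r for r, a in enumerate(sorted(self._nodes))}
            return ranks[addr], len(self._nodes)

    def is_leader(self, addr):
        with self._lock:
            return addr == min(self._nodes)

rdzv = Rendezvous()
rank_a, world_a = rdzv.register("10.0.0.1:2379", 8)
rank_b, world_b = rdzv.register("10.0.0.2:2379", 8)
print(rank_a, rank_b, world_b)  # ranks 0 and 1, world size 2
```

Once every node knows its rank and the full peer list, the Ring AllReduce topology can be constructed from the sorted node order.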
When a new node joins, all existing nodes pause after completing their current step, refresh the node list from ETCD, and resume training with the updated topology, enabling seamless scale-out. Node removal follows the same protocol, allowing automatic scale-in without manual intervention.
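The key detail is that membership changes are only acted on at step boundaries, so a step in flight always completes over a consistent ring. A toy sketch of that loop, with a dict standing in for the ETCD watch (all names are illustrative assumptions, not the actual implementation):

```python
class Membership:
    """Toy membership store standing in for the ETCD watch described above."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.version = 0  # bumped on every join/leave, like an ETCD revision

    def add(self, node):
        self.nodes.append(node)
        self.version += 1

    def remove(self, node):
        self.nodes.remove(node)
        self.version += 1

def train(membership, total_steps, events=None):
    """Run steps, rebuilding the ring whenever membership has changed.

    `events` maps step -> callable mutating membership, simulating nodes
    joining or leaving while training runs. Returns the list of
    (step, node list) pairs at which the topology was refreshed.
    """
    events = events or {}
    seen = membership.version
    refreshes = []
    for step in range(total_steps):
        if step in events:
            events[step](membership)       # a node joins or leaves mid-run
        # ... one synchronous Ring AllReduce training step ...
        if membership.version != seen:     # checked only at the step boundary
            seen = membership.version
            refreshes.append((step, list(membership.nodes)))
    return refreshes
```

Because the check happens after the step finishes, no gradient exchange is ever interrupted mid-AllReduce; the ring is simply rebuilt before the next step.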
The platform runs on a Kubernetes cluster with a custom operator that manages training pod lifecycles. A Rendezvous component inside each pod interacts with ETCD, while a Remote Cache stores intermediate training data to support fault‑tolerant resumption of low‑priority jobs.
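The Remote Cache's role is to give preemptible, low-priority jobs a save/resume contract: state is checkpointed as training progresses, and a restarted pod picks up from the last cached step. A minimal sketch of that contract, using a local dict in place of the real network-backed cache (the names and API shape are assumptions for illustration):

```python
import pickle

class RemoteCache:
    """Toy key-value store standing in for the Remote Cache described above.

    In production this would be a network service; a dict suffices here to
    show the checkpoint/resume contract for preempted low-priority jobs.
    """

    def __init__(self):
        self._store = {}

    def put(self, key, obj):
        self._store[key] = pickle.dumps(obj)

    def get(self, key, default=None):
        blob = self._store.get(key)
        return pickle.loads(blob) if blob is not None else default

def run_job(cache, job_id, total_steps, preempt_at=None):
    """Resume from the cached step if present; checkpoint after every step.

    Returns the step at which the job stopped (== total_steps if finished).
    """
    state = cache.get(job_id, {"step": 0})
    for step in range(state["step"], total_steps):
        if preempt_at is not None and step == preempt_at:
            return step                       # preempted; progress is cached
        # ... one training step ...
        cache.put(job_id, {"step": step + 1})
    return total_steps
```

A preempted job loses at most the step that was in flight; a replacement pod created by the operator calls `run_job` again with the same `job_id` and continues where the cache left off.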
Performance tests on ResNet‑50 using ImageNet showed that elastic training achieves comparable accuracy to single‑node multi‑GPU training while significantly reducing total GPU‑hours by dynamically leveraging idle resources. The system also shortens overall training time and improves cost efficiency.
Future work aims to simplify code changes required for elasticity, enhance stability and fault‑tolerance, increase training efficiency, and open‑source more components to benefit the broader community.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.