
Optimizing Distributed Cache for Large-Scale Deep Learning Training with Alluxio and SiloD

This article examines the storage bottlenecks in large‑scale AI training, evaluates local‑disk and Alluxio‑based distributed caching strategies, proposes uniform cache eviction and replica‑aware global policies, and introduces the SiloD framework for coordinated compute‑storage scheduling to dramatically improve GPU utilization and overall cluster throughput.

DataFunTalk

Recent advances in deep learning have made AI training workloads increasingly data‑intensive, exposing I/O bottlenecks when GPUs wait for remote storage. The article first outlines the typical compute‑storage separation architecture, the limited local SSD capacity on GPU nodes, and the resulting under‑utilization of expensive accelerators.

Two main caching approaches are discussed. The first relies on local disk caches within Docker containers, which reduces remote reads but suffers from limited space and duplicate data across jobs. The second leverages Alluxio as a distributed cache, enabling data sharing across nodes and jobs, but requires careful cache eviction and bandwidth allocation.

The authors argue that traditional LRU or LFU eviction policies are ill-suited for AI training: the dataset is reshuffled every epoch and each sample is read exactly once per pass, so there is no temporal or spatial locality for those policies to exploit. They instead propose a uniform strategy that pins a fixed subset of the dataset in cache for the lifetime of a job, achieving higher and more predictable hit rates without eviction churn.
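The intuition can be checked with a small simulation. This is an illustrative sketch, not the article's implementation: `simulate` and its parameters are hypothetical names. When every epoch visits each sample exactly once in a fresh random order, a pinned ("uniform") cache hits at exactly the cache-to-dataset ratio with zero evictions, while an LRU cache churns constantly for a comparable hit rate.

```python
import random
from collections import OrderedDict

def simulate(num_items=1000, cache_size=250, epochs=5, seed=0):
    """Compare LRU against a pinned ("uniform") cache when every
    epoch reads each sample exactly once in a random order."""
    rng = random.Random(seed)

    lru = OrderedDict()              # item -> None, most recent last
    pinned = set(range(cache_size))  # fixed subset, never evicted
    stats = {"lru_hits": 0, "lru_evictions": 0, "pin_hits": 0}

    for _ in range(epochs):
        order = list(range(num_items))
        rng.shuffle(order)           # per-epoch shuffle destroys locality
        for item in order:
            # LRU cache: evict the least recently used item on a miss.
            if item in lru:
                stats["lru_hits"] += 1
                lru.move_to_end(item)
            else:
                if len(lru) >= cache_size:
                    lru.popitem(last=False)
                    stats["lru_evictions"] += 1
                lru[item] = None
            # Pinned cache: membership never changes during the job.
            if item in pinned:
                stats["pin_hits"] += 1

    accesses = num_items * epochs
    stats["lru_hit_rate"] = stats["lru_hits"] / accesses
    stats["pin_hit_rate"] = stats["pin_hits"] / accesses
    return stats

if __name__ == "__main__":
    print(simulate())
```

The pinned cache's hit rate here is deterministic (cache_size / num_items per epoch), which also makes a job's I/O demand predictable for the scheduler, a property the next section relies on.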

To address the interplay between cache size and remote bandwidth, the article highlights the need for coordinated scheduling of compute, storage, and network resources. It presents a formula for job throughput based on cache size, dataset size, and bandwidth, and shows how differentiating cache allocation per job can maximize performance.
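A minimal model makes the formula concrete. This sketch is an assumption-laden illustration, not SiloD's exact formula: with uniform caching the hit ratio is h = cache / dataset, only a (1 - h) fraction of each epoch's bytes crosses the remote link, so the sustainable data-consumption rate is remote_bw / (1 - h), capped by the rate at which the GPUs can consume data.

```python
def job_throughput(compute_gbps, remote_bw_gbps, cache_gb, dataset_gb):
    """Illustrative throughput model (hypothetical, not SiloD's exact
    formula): throughput = min(compute rate, remote_bw / (1 - hit_ratio))."""
    h = min(1.0, cache_gb / dataset_gb)
    if h >= 1.0:
        return compute_gbps               # fully cached: compute-bound
    io_rate = remote_bw_gbps / (1.0 - h)  # misses gate data consumption
    return min(compute_gbps, io_rate)

if __name__ == "__main__":
    # Two jobs sharing 100 GB of cache, each with 1 GB/s of remote bandwidth
    # and GPUs able to consume 4 GB/s. Splitting the cache evenly:
    even = job_throughput(4, 1, 50, 100) + job_throughput(4, 1, 50, 500)
    # Skewing cache toward the job whose dataset it can mostly cover:
    skew = job_throughput(4, 1, 90, 100) + job_throughput(4, 1, 10, 500)
    print(even, skew)  # differentiated allocation wins
```

Under this model, giving most of the cache to the job whose dataset it nearly covers raises aggregate throughput, which is the article's argument for differentiating cache allocation per job rather than splitting it evenly.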

The SiloD framework, presented at EuroSys 2023, integrates compute‑resource scheduling (K8s, YARN) with storage‑resource management, using Alluxio for cache and a global replica‑aware eviction policy. Workers report replica information to a master, enabling weighted eviction decisions that consider both recency and replica count.
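A weighted eviction decision of this kind might look as follows. This is a hypothetical scoring function for illustration (the names `BlockInfo`, `eviction_score`, and the weight are assumptions, not Alluxio's or SiloD's actual API): staleness favors evicting old blocks, and blocks with more replicas elsewhere in the cluster are cheaper to lose.

```python
import time
from dataclasses import dataclass

@dataclass
class BlockInfo:
    last_access: float  # recency, reported by workers to the master
    replicas: int       # global replica count, tracked by the master

def eviction_score(info, now, replica_weight=0.5):
    """Higher score => better eviction candidate. Combines recency
    (staleness) with replica count: extra copies discount the loss."""
    staleness = now - info.last_access
    return staleness * (1 + replica_weight * (info.replicas - 1))

def pick_victim(blocks, now=None):
    now = now if now is not None else time.time()
    return max(blocks, key=lambda b: eviction_score(blocks[b], now))

if __name__ == "__main__":
    now = 1000.0
    blocks = {
        "a": BlockInfo(last_access=990.0, replicas=1),  # recent, sole copy
        "b": BlockInfo(last_access=900.0, replicas=1),  # old, sole copy
        "c": BlockInfo(last_access=950.0, replicas=4),  # newer but replicated
    }
    print(pick_victim(blocks, now))  # "c": replicas outweigh recency
```

Pure LRU would evict "b", the stalest block; the replica-aware score instead picks "c", whose data remains available on other workers, preserving the only copy of "b" in the cluster-wide cache.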

Experimental results indicate that SiloD's coordinated scheduling combined with the optimized caching policies improves cluster utilization and throughput by up to 8x, cutting GPU idle time and end-to-end job latency.

In summary, the article recommends using uniform cache eviction for AI training, treating cache and network bandwidth as jointly scheduled resources, and extending existing schedulers with a global, replica‑aware cache management layer such as SiloD.

distributed cache · Resource Scheduling · Alluxio · AI training · Cache Eviction · SiloD
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
