DataFunSummit
Mar 21, 2026 · Artificial Intelligence

How Slidebatching Revolutionizes LLM Inference Scheduling for Faster, More Efficient AI Services

The article examines the memory and latency challenges of inference for LLMs at the 175‑billion‑parameter scale, introduces the xLLM framework’s Slidebatching and PD‑separation scheduling strategies, and details how these techniques achieve up to 35% system‑throughput gains and 52% SLO‑compliance improvements on real‑world multi‑priority workloads.

AI performance · LLM · PD separation
15 min read
Tencent Technical Engineering
Jan 23, 2026 · Artificial Intelligence

Unlocking AI Infra: Distributed Inference, PD Separation, TileLang, and Next‑Gen Agent Infrastructure

This article surveys the 2025 AI infrastructure landscape, covering distributed inference with PD separation, dynamic DOPD scheduling, AFD attention‑FFN disaggregation, high‑bandwidth cross‑machine communication libraries, the TileLang programming model, RL train‑inference decoupling via SeamlessFlow, and secure, low‑latency agent‑infrastructure designs for future large‑scale models.

AI infrastructure · Distributed inference · GPU communication
27 min read