PaperAgent
May 9, 2026 · Artificial Intelligence

How ActDistill Slashes Deployment Costs of VLA Large Models

ActDistill, proposed by Tongji University and collaborators, cuts the inference latency and compute consumption of Vision‑Language‑Action (VLA) models while speeding up the action loop. By selectively distilling action‑relevant knowledge, it achieves up to a 1.67× speedup while preserving control quality on real robot hardware.

ActDistill · VLA · dynamic routing