UniLS: End-to-End Audio-Driven Framework Eliminates the ‘Poker Face’ in Digital Human Dialogue
UniLS is the first end-to-end audio-driven framework to jointly generate both speaking and listening facial motions for digital humans. In the paper, accepted to CVPR 2026, extensive quantitative evaluations and user studies show that UniLS achieves state-of-the-art speaking accuracy, improves listening naturalness by 44.1%, and runs at over 500 FPS.
