LRT: Implicit Reasoning Chains Boost Speed and Accuracy by Removing Redundant Steps
Researchers introduce Latent Reasoning Tuning (LRT), a lightweight inference network that encodes explicit reasoning chains into fixed‑length latent vectors, eliminating thousands of decoding steps. Experiments reveal substantial redundancy in traditional chains and show that LRT achieves faster, more accurate inference, outperforming existing efficient reasoning methods.
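The core idea, compressing a variable-length reasoning chain into a fixed number of latent vectors so the model no longer decodes every intermediate token, can be sketched roughly as follows. The summary does not describe LRT's actual inference network, so the segment-wise mean-pooling encoder below is a hypothetical stand-in chosen only to illustrate the fixed-length compression:

```python
import numpy as np

def encode_chain(chain_token_embs: np.ndarray, k: int = 4) -> np.ndarray:
    """Compress a (T x d) reasoning chain into k fixed latent vectors.

    Illustrative stand-in for a learned inference network: split the
    chain into k contiguous segments and mean-pool each one. The output
    shape (k, d) is constant no matter how long the chain is.
    """
    segments = np.array_split(chain_token_embs, k)
    return np.stack([seg.mean(axis=0) for seg in segments])

rng = np.random.default_rng(0)

# A 1000-step explicit chain with embedding dimension 16 collapses to
# just 4 latent vectors -- the decoder would consume these in one pass
# instead of generating 1000 tokens autoregressively.
chain = rng.normal(size=(1000, 16))
latents = encode_chain(chain, k=4)
print(latents.shape)  # (4, 16)
```

Whatever the real encoder looks like, the fixed output length is what removes the per-step decoding cost: latency no longer grows with the length of the explicit chain.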
