Machine Heart
Apr 12, 2026 · Artificial Intelligence

LRT: Implicit Reasoning Chains Boost Speed and Accuracy by Removing Redundant Steps

Researchers introduce Latent Reasoning Tuning (LRT), a lightweight inference network that encodes explicit reasoning chains into fixed‑length latent vectors, eliminating thousands of decoding steps. Their experiments reveal substantial redundancy in traditional chains and show that LRT delivers faster, more accurate inference, outperforming existing efficient‑reasoning methods.
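The core idea in the summary above — replacing a long explicit chain of reasoning tokens with one fixed‑length latent vector — can be sketched minimally. This is an illustration only: the function names are invented, and mean pooling stands in for whatever learned encoder LRT actually uses.

```python
def encode_chain_to_latent(step_embeddings):
    """Compress a variable-length reasoning chain into one fixed-length latent.

    step_embeddings: list of equal-length vectors, one per explicit reasoning
    step. Mean pooling is a stand-in for the paper's learned inference network;
    the point is that the output size is fixed regardless of chain length.
    """
    n = len(step_embeddings)
    dim = len(step_embeddings[0])
    return [sum(vec[i] for vec in step_embeddings) / n for i in range(dim)]


# Three explicit steps collapse into a single 2-dimensional latent vector,
# so no per-step decoding is needed downstream.
chain = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
latent = encode_chain_to_latent(chain)
```

However many steps the chain contains, the latent has the same length, which is what removes the per-step decoding cost the article highlights.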

DeepSeek · Efficient Inference · Hybrid Reasoning
Machine Heart
Apr 2, 2026 · Artificial Intelligence

ColaVLA Demonstrates Autonomous Driving Models Can Reason Without Text

ColaVLA replaces explicit text‑based reasoning with latent‑space inference and a hierarchical parallel planner, achieving lower trajectory error, reduced collision rates, and up to ten‑fold faster inference while preserving safety and real‑time performance on autonomous driving benchmarks.

autonomous driving · hierarchical planning · large language models
Tencent Advertising Technology
Nov 20, 2025 · Artificial Intelligence

CoderRec: Latent Reasoning Boosts Sequential Recommendation

CoderRec, a new sequential recommendation framework jointly developed by Tencent Advertising Technology and Tsinghua University, combines domain‑specific latent reasoning with cross‑scale model collaboration to capture implicit user intent and fuse large‑language‑model semantics with traditional recommender signals, achieving state‑of‑the‑art performance on multiple Amazon datasets.

Artificial Intelligence · Recommender Systems · cross-scale collaboration
HyperAI Super Neural
Sep 30, 2025 · Artificial Intelligence

OnePiece: Applying LLM‑Style Reasoning to Item‑ID Sequences for Generative Recommendation

The article presents the OnePiece framework, which injects LLM‑style context engineering and latent reasoning into item‑ID‑based search‑and‑recommendation models. It details the design choices, training tricks, and attention analysis, and reports online gains of roughly 1% in GMV and ad revenue, offering a thorough technical dissection of generative recommendation in industrial settings.

Context Engineering · Generative Recommendation · LLM Reasoning