Baobao Algorithm Notes
May 26, 2025 · Artificial Intelligence

When Should Large Language Models Think? 10 Cutting‑Edge Strategies to Boost Reasoning Efficiency

This article reviews ten recent papers that tackle the over‑thinking problem in large language models through shortened chain‑of‑thought reasoning, dynamic early exit, adaptive thinking triggers, and reinforcement‑learning‑based training, showing how models can maintain or even improve accuracy while sharply reducing token usage and latency.
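
As a minimal, self-contained sketch of the dynamic early-exit idea surveyed here: decoding consumes reasoning steps one at a time and stops as soon as a confidence probe clears a threshold, saving the remaining thinking budget. The step generator and the probe scores below are toy stand-ins of my own, not the API of any of the surveyed papers.

```python
# Toy illustration of dynamic early exit during chain-of-thought decoding.
from typing import Iterable, List, Tuple

def reason_with_early_exit(
    steps: Iterable[Tuple[str, float]],  # (reasoning step, probe confidence)
    threshold: float = 0.9,
) -> Tuple[List[str], int]:
    """Consume reasoning steps until a confidence probe clears `threshold`."""
    trace: List[str] = []
    for used, (step, confidence) in enumerate(steps, start=1):
        trace.append(step)
        if confidence >= threshold:  # answer already stable -> stop thinking
            return trace, used
    return trace, len(trace)         # fell through: spent the full budget

if __name__ == "__main__":
    # Simulated chain of thought whose probe confidence rises step by step.
    simulated = [
        ("Restate the problem.", 0.35),
        ("Set up the equation.", 0.62),
        ("Solve for x = 4.", 0.93),   # confident here -> exit early
        ("Double-check the arithmetic.", 0.97),
    ]
    trace, used = reason_with_early_exit(simulated, threshold=0.9)
    print(f"Exited after {used} of {len(simulated)} steps.")
```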

AI research · adaptive inference · chain-of-thought
38 min read
Xiaohongshu Tech REDtech
Jan 12, 2024 · Artificial Intelligence

Negative Sample Assisted Distillation for Large Language Models

This AAAI 2024 paper introduces a Negative Sample Assisted Distillation framework, comprising Negative Assistance Training, Negative Calibration Enhancement, and Adaptive Self‑Consistency, that leverages both correct and incorrect reasoning examples to train a compact LLaMA‑7B student, achieving up to a 75.75% accuracy gain over standard fine‑tuning on MATH and improving out‑of‑domain benchmarks.
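
As a rough illustration of how negative rationales can enter a distillation objective, the sketch below combines a standard cross-entropy term on rationales that led to correct answers with an unlikelihood-style penalty on rationales that led to wrong ones. The loss shape, the helper name `neg_assisted_loss`, and the weight `beta` are illustrative assumptions of mine, not the paper's exact formulation.

```python
# Toy sketch: distillation loss that also learns from negative rationales.
import torch
import torch.nn.functional as F

def neg_assisted_loss(logits_pos, targets_pos, logits_neg, targets_neg, beta=0.5):
    # Standard distillation term: cross-entropy on tokens from teacher
    # rationales that produced correct answers.
    pos_loss = F.cross_entropy(logits_pos, targets_pos)
    # Unlikelihood-style calibration term: penalize probability mass the
    # student still assigns to tokens from incorrect rationales.
    p_neg = F.softmax(logits_neg, dim=-1).gather(-1, targets_neg.unsqueeze(-1))
    neg_loss = -torch.log(1.0 - p_neg.clamp(max=1 - 1e-6)).mean()
    return pos_loss + beta * neg_loss

if __name__ == "__main__":
    vocab = 100
    logits_pos = torch.randn(8, vocab)        # student logits, positive tokens
    logits_neg = torch.randn(8, vocab)        # student logits, negative tokens
    targets_pos = torch.randint(vocab, (8,))  # gold next tokens (positive)
    targets_neg = torch.randint(vocab, (8,))  # next tokens from bad rationales
    print(neg_assisted_loss(logits_pos, targets_pos, logits_neg, targets_neg).item())
```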

LLM · chain-of-thought · knowledge distillation
13 min read