
Critique Fine-Tuning (CFT): Boosting Large Language Model Reasoning with Minimal Data

The paper introduces Critique Fine-Tuning (CFT), a method that replaces simple imitation in supervised fine-tuning with critique-based learning. Using only 50K samples, CFT achieves superior reasoning performance on mathematical benchmarks, outperforming traditional reinforcement-learning approaches that require millions of examples.

DataFunTalk

Supervised fine‑tuning (SFT) traditionally trains large language models (LLMs) to imitate high‑quality human or synthetic responses, but its effectiveness plateaus as dataset size grows, especially for already strong base models.

A recent study by researchers from CMU, Waterloo, and other institutions proposes Critique Fine-Tuning (CFT), which shifts the training focus from direct imitation to learning from critiques of erroneous answers. The CFT dataset contains 50K question-answer pairs, each paired with a model-generated critique of a flawed response, primarily covering mathematics (65%) along with physics, chemistry, and business topics.
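A single training item can be pictured as a simple record. The field names below are illustrative only, not the released dataset's actual schema:

```python
# Hypothetical layout of one CFT training item; field names are
# illustrative and may differ from the released 50K-sample dataset.
cft_record = {
    "question": "What is 15% of 80?",       # problem x
    "wrong_answer": "15% of 80 is 10.",     # erroneous response y
    "critique": (                           # critique c (the training target)
        "The response is incorrect: 15% of 80 is 0.15 * 80 = 12, not 10. "
        "The error comes from dividing 80 by 8 instead of multiplying by 0.15."
    ),
}
```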

During training, the model receives the concatenated problem (x) and wrong response (y) as input and is optimized to generate a critique (c), i.e., to maximize log p(c | x, y). Experiments with 7B base models such as DeepSeekMath-base, Qwen2.5, and Qwen2.5-Math show that CFT consistently outperforms the best SFT variants, achieving 4-10 percentage-point higher accuracy on math-focused benchmarks while converging faster with far fewer training steps.
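Concretely, only the critique tokens are supervised; the model is never trained to reproduce the wrong answer. A minimal sketch of how such an example could be assembled for a standard causal-LM trainer, assuming the common convention that label positions set to -100 are excluded from the loss (as in Hugging Face Transformers); `build_cft_example` is an illustrative helper, not the paper's code:

```python
# Sketch of CFT example construction for a causal-LM trainer.
# Assumption: the trainer skips loss on label positions equal to -100
# (the Hugging Face convention). build_cft_example is a hypothetical name.

IGNORE_INDEX = -100

def build_cft_example(prompt_ids, critique_ids):
    """Build (input_ids, labels) for one CFT item.

    prompt_ids   -- token ids of the concatenated question x and wrong answer y
    critique_ids -- token ids of the critique c

    The labels mask the prompt, so cross-entropy is computed only on the
    critique tokens: the model learns to maximize log p(c | x, y).
    """
    input_ids = prompt_ids + critique_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + critique_ids
    return input_ids, labels
```

Feeding these (input_ids, labels) pairs to an ordinary next-token cross-entropy loss then optimizes log p(c | x, y), the critique objective described above, rather than imitation of any reference answer.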

Notably, Qwen2.5-Math-7B-CFT attains an average performance of 48.1%, surpassing much larger models such as Llama-3.1-70B-Instruct (40.4%) and approaching the 56.4% of the 72B Qwen2.5-Math-Instruct, all while using dramatically less GPU time.

The authors note several limitations: the gold-standard critiques are themselves generated by LLMs, with roughly 20% containing errors, and current CFT models cannot yet critique or improve their own outputs. The dataset also focuses mainly on mathematical reasoning, leaving open questions about applicability to programming, scientific, or humanities tasks.

Future directions suggest improving critique quality through human verification, extending CFT to multimodal and broader domains, and combining CFT with other paradigms such as reinforcement learning or self‑correction to enable autonomous model refinement.

Large Language Models, Supervised Fine-tuning, Training Efficiency, AI Reasoning, Critique Fine-Tuning, Mathematical Benchmarks
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
