
Improving the Mathematical Reasoning Ability of Large Language Models: Overview, Mixed Instructions, Synthetic Data, and Training Optimization

This article presents a comprehensive approach to enhancing large language models' mathematical reasoning by reviewing model architectures, introducing mixed CoT‑PoT instructions, generating and filtering synthetic data, and applying multi‑stage training optimizations such as RFT, PPO, and DPO, with detailed experimental results and Q&A.

DataFunTalk

Overview – The talk focuses on boosting the mathematical reasoning capability of large language models (LLMs) as a key indicator of general intelligence, treating it as a universal skill rather than a separate task.

Large Model Overview – Current mainstream LLMs (GPT‑3, Bloom, GLM, LLaMA, Baichuan, Qwen, etc.) share a transformer‑based architecture at varying parameter scales (from 6B to 175B). Typical configurations specify vocabulary size, number of transformer layers, multi‑head attention, and feed‑forward networks. Optimization techniques such as sparse attention, FlashAttention, MQA (multi‑query attention), and GQA (grouped‑query attention) are commonly used.
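To make the GQA idea concrete, here is a minimal NumPy sketch (not any particular model's implementation): groups of query heads share a single key/value head, which shrinks the KV cache by the group factor; MQA is the special case of one KV head.

```python
import numpy as np

def grouped_query_attention(q, k, v):
    """Minimal grouped-query attention (GQA) sketch.

    q: (seq, n_q_heads, d)    k, v: (seq, n_kv_heads, d)
    Each group of n_q_heads // n_kv_heads query heads attends
    through one shared KV head.
    """
    seq, n_q_heads, d = q.shape
    n_kv_heads = k.shape[1]
    group = n_q_heads // n_kv_heads
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                                  # shared KV head index
        scores = q[:, h] @ k[:, kv].T / np.sqrt(d)       # (seq, seq)
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)                    # row-wise softmax
        out[:, h] = w @ v[:, kv]
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8, 16))   # 8 query heads
k = rng.normal(size=(4, 2, 16))   # only 2 KV heads -> 4x smaller KV cache
v = rng.normal(size=(4, 2, 16))
print(grouped_query_attention(q, k, v).shape)  # (4, 8, 16)
```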

Training Pipeline – Standard LLM training consists of four stages: (1) pre‑training on massive token corpora using thousands of GPUs, (2) supervised fine‑tuning (SFT) for instruction alignment, (3) reward‑model training, and (4) reinforcement learning with human feedback (RLHF). Similar pipelines are applied to LLaMA‑2‑Chat and other models.

Mathematical Reasoning Optimization Process – The workflow is divided into data construction, data filtering, model building, and training/optimization. Data are categorized into mixed instructions (CoT + PoT) and synthetic data, with quality and diversity filtering guided by reward and critique models.

Mixed Instructions – Problems are split into logical reasoning (handled by Chain‑of‑Thought) and computational parts (handled by Program‑of‑Thought). This hybrid approach leverages the strengths of each method and mitigates their weaknesses.
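A hedged sketch of the PoT half of this split: instead of doing arithmetic in natural language, the model emits a short program whose execution produces the numeric answer. The `solve_with_pot` helper and the `answer` variable convention are illustrative, not from the talk.

```python
def solve_with_pot(generated_program: str) -> float:
    """Execute a model-generated program in a restricted namespace
    and return the value it binds to `answer`."""
    ns: dict = {}
    # Empty builtins is a crude sandbox; real systems use stronger isolation.
    exec(generated_program, {"__builtins__": {}}, ns)
    return ns["answer"]

# Example model output for: "A train travels 120 km in 1.5 h; average speed?"
program = """
distance_km = 120
time_h = 1.5
answer = distance_km / time_h
"""
print(solve_with_pot(program))  # 80.0
```

The logical setup ("distance over time") stays in the CoT text, while the interpreter guarantees the division is exact, which is where pure CoT most often fails.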

Synthetic Data – Due to the scarcity of high‑quality math instruction data, self‑instruct pipelines are used to expand seed tasks across sub‑domains (matrix operations, calculus, equations, etc.). Quality is ensured through simple similarity metrics (LCS, Jaccard) and rigorous reward/critique scoring.
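The similarity-based dedup step can be sketched as follows. This is an assumed implementation: token-level Jaccard plus `difflib.SequenceMatcher` (a longest-matching-subsequence ratio, a practical stand-in for LCS); the 0.7 thresholds are illustrative.

```python
from difflib import SequenceMatcher

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two instructions."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def lcs_ratio(a: str, b: str) -> float:
    """LCS-style similarity via difflib's matching-block ratio."""
    return SequenceMatcher(None, a, b).ratio()

def keep_if_novel(candidate: str, pool: list[str],
                  jaccard_max: float = 0.7, lcs_max: float = 0.7) -> bool:
    """Accept a synthetic instruction only if it is dissimilar
    to everything already accepted."""
    return all(jaccard(candidate, p) < jaccard_max and
               lcs_ratio(candidate, p) < lcs_max for p in pool)

pool = ["Compute the determinant of a 3x3 matrix."]
print(keep_if_novel("Compute the determinant of a 3x3 matrix.", pool))  # False
print(keep_if_novel("Integrate x**2 from 0 to 1.", pool))               # True
```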

Critique Model – A language‑model‑based evaluator that scores both instructions and answers. It is trained on data where GPT‑4o provides reference answers and scores, achieving ~84.8% accuracy, slightly below GPT‑4o’s 85.9%.
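The reported accuracy figures imply measuring agreement between the critique model and GPT‑4o's reference judgments. A minimal sketch of that evaluation, with an assumed pass/fail threshold and illustrative scores:

```python
def agreement(critic_scores, reference_scores, threshold=0.5):
    """Fraction of samples where the critique model and the reference
    scorer agree on whether an answer passes the quality bar."""
    hits = sum((c >= threshold) == (r >= threshold)
               for c, r in zip(critic_scores, reference_scores))
    return hits / len(critic_scores)

# Two agreements (indices 0 and 1), two disagreements (indices 2 and 3).
print(agreement([0.9, 0.2, 0.7, 0.4], [0.8, 0.3, 0.4, 0.6]))  # 0.5
```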

Training Optimization (RFT, PPO, DPO) – RFT (rejection sampling fine‑tuning) uses small models (e.g., 7B LLaMA) to generate diverse reasoning paths, which are filtered for correctness and diversity before being used to fine‑tune larger models. PPO and DPO are applied on top of RFT data; DPO shows modest gains (17% win rate) while PPO yields more stable improvements.
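The RFT data-collection step described above can be sketched as: sample many reasoning paths per problem, keep only those whose final answer matches the reference, then deduplicate near-identical paths. The answer extractor and the whitespace-token diversity key here are deliberately crude placeholders.

```python
def rft_filter(paths, reference_answer, extract_answer):
    """Keep correct reasoning paths, then drop duplicates so the
    fine-tuning set stays diverse."""
    correct = [p for p in paths if extract_answer(p) == reference_answer]
    seen, diverse = set(), []
    for p in correct:
        key = tuple(p.split())   # crude diversity key; edit distance is common
        if key not in seen:
            seen.add(key)
            diverse.append(p)
    return diverse

paths = [
    "step1 ... so the answer is 42",
    "step1 ... so the answer is 42",   # duplicate path, dropped
    "different route ... answer is 42",
    "wrong route ... answer is 7",     # wrong final answer, dropped
]
keep = rft_filter(paths, "42", lambda p: p.rsplit(" ", 1)[-1])
print(len(keep))  # 2
```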

Experimental Results – On a held‑out test set, SFT achieves 71% accuracy, RFT improves to 77%, while DPO adds limited further gains. Hard samples (low critique scores) benefit from dynamic loss weighting.
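One way to realize the dynamic loss weighting mentioned for hard samples: map each sample's critique score to a weight so that low-scoring (hard) samples contribute more to the loss. The linear mapping and its constants are illustrative assumptions, not the talk's exact scheme.

```python
def sample_weight(critique_score: float, floor: float = 0.5,
                  scale: float = 2.0) -> float:
    """Critique score in [0, 1] -> loss weight: hard (low-score)
    samples are up-weighted, easy ones stay near the floor."""
    return floor + scale * (1.0 - critique_score)

def weighted_loss(per_sample_losses, critique_scores):
    """Weighted mean of per-sample losses, normalized by total weight."""
    weights = [sample_weight(s) for s in critique_scores]
    return sum(w * l for w, l in zip(weights, per_sample_losses)) / sum(weights)

# Hard sample (score 0.0, weight 2.5) dominates the easy one (score 1.0, weight 0.5).
print(weighted_loss([2.0, 1.0], [0.0, 1.0]))
```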

Q&A Highlights – Discussed differences between PPO and DPO, the role of synthetic data in complex reasoning, and model size considerations for reward versus critique models.

Tags: AI, large language models, reward model, training optimization, synthetic data, math reasoning
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
