Machine Learning Algorithms & Natural Language Processing
Apr 10, 2026 · Artificial Intelligence

Agent-Dice: Geometric Consensus Filtering Beats Catastrophic Forgetting in LLM Agents

Agent-Dice introduces a framework that combines geometric consensus filtering with curvature-based importance weighting to disentangle knowledge updates, preventing catastrophic forgetting in large-language-model agents while preserving plasticity. It demonstrates superior stability-plasticity trade-offs on GUI and tool-use benchmarks across multiple base models.
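
The paper's exact geometry is not reproduced in this summary, but the two ingredients can be sketched: keep update coordinates where gradients from old and new knowledge agree in sign (consensus), and scale what remains by an inverse-curvature term so important weights move less. Below is a minimal PyTorch sketch under those assumptions; the threshold and the Fisher proxy are illustrative, not the paper's.

```python
import torch

def consensus_filter(grads: list[torch.Tensor], threshold: float = 0.5) -> torch.Tensor:
    """Keep coordinates where the gradient signs mostly agree across sources."""
    stacked = torch.stack(grads)                 # (num_sources, num_params)
    sign_mean = torch.sign(stacked).mean(dim=0)  # in [-1, 1]; |1| = full agreement
    mask = sign_mean.abs() >= threshold          # consensus coordinates only
    return stacked.mean(dim=0) * mask

def curvature_weights(fisher_diag: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Down-weight updates to high-curvature (important) parameters."""
    return 1.0 / (fisher_diag + eps)

# Toy usage: gradients from old-task and new-task data over a flat parameter vector.
g_old = torch.tensor([0.5, -0.2, 0.3])
g_new = torch.tensor([0.4,  0.6, 0.2])
fisher = torch.tensor([10.0, 0.1, 0.1])          # e.g., a running average of g**2
update = consensus_filter([g_old, g_new]) * curvature_weights(fisher)
print(update)  # coordinate 1 is zeroed out (sign disagreement)
```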

Agent · Catastrophic Forgetting · Continual Learning
8 min read
Machine Learning Algorithms & Natural Language Processing
Feb 22, 2026 · Artificial Intelligence

What Is On-Policy Distillation? A Deep Dive into On-Policy and Self-Distillation

The article explains on-policy distillation, derives its forward- and reverse-KL gradients, and introduces self-distillation, in which the policy serves as its own teacher. It covers practical implementation tricks such as extra-knowledge injection and EMA or trust-region teacher stabilization, and highlights benefits such as reduced catastrophic forgetting, fewer "aha moments," and a narrower train-test gap, especially for larger models.
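
The reverse-KL term is the core of the on-policy recipe: the student samples its own tokens and is penalized by KL(student ∥ teacher) on those tokens. A minimal PyTorch sketch, with model and sampling plumbing omitted and illustrative shapes:

```python
import torch
import torch.nn.functional as F

def reverse_kl_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(p_student || p_teacher) per position, averaged.

    Reverse KL is mode-seeking: the student is penalized for placing
    mass where the teacher has little, which curbs drift on the
    student's own samples. Shapes: (batch, seq_len, vocab).
    """
    log_p_s = F.log_softmax(student_logits, dim=-1)
    log_p_t = F.log_softmax(teacher_logits, dim=-1)
    kl = (log_p_s.exp() * (log_p_s - log_p_t)).sum(dim=-1)
    return kl.mean()

# Sanity check: identical logits give (near-)zero divergence.
logits = torch.randn(2, 5, 100)
assert reverse_kl_loss(logits, logits.clone()).abs() < 1e-5
```

For self-distillation, `teacher_logits` would come from an EMA or trust-region copy of the student rather than a separate model.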

Catastrophic Forgetting · EMA · KL Divergence
6 min read
AI Frontier Lectures
Jan 12, 2026 · Artificial Intelligence

How GraphKeeper Tackles Catastrophic Forgetting in Domain‑Incremental Graph Learning

This article analyzes the GraphKeeper framework, which combines multi-domain graph decoupling, unbiased ridge-regression knowledge preservation, and a domain-aware distribution discriminator to overcome catastrophic forgetting in domain-incremental graph neural network training. Its superiority is validated through extensive experiments and ablations.
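
The ridge-regression component can be made concrete in closed form: if each domain's frozen embeddings are folded into second-order statistics, the classifier head can be refit exactly without replaying raw data. The incremental-statistics formulation below is an assumption in the spirit of the framework, not the paper's exact equations.

```python
import torch

class RidgeHead:
    """Classifier head refit in closed form from running statistics."""
    def __init__(self, dim: int, num_classes: int, lam: float = 1.0):
        self.G = lam * torch.eye(dim)            # accumulates X^T X + lam*I
        self.C = torch.zeros(dim, num_classes)   # accumulates X^T Y

    def update(self, feats: torch.Tensor, labels: torch.Tensor):
        """Fold a new domain's frozen embeddings into the statistics."""
        Y = torch.nn.functional.one_hot(labels, self.C.shape[1]).float()
        self.G += feats.T @ feats
        self.C += feats.T @ Y

    def weights(self) -> torch.Tensor:
        """Closed-form ridge solution W = (X^T X + lam*I)^{-1} X^T Y."""
        return torch.linalg.solve(self.G, self.C)

# Usage: domains arrive sequentially; no rehearsal buffer is stored.
head = RidgeHead(dim=16, num_classes=3)
for _ in range(2):  # two incremental domains
    head.update(torch.randn(32, 16), torch.randint(0, 3, (32,)))
logits = torch.randn(4, 16) @ head.weights()
```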

Catastrophic Forgetting · Domain Incremental Learning · GraphKeeper
15 min read
Baobao Algorithm Notes
Dec 7, 2025 · Artificial Intelligence

Can RL Really Boost LLM Reasoning? A Critical Review of Recent Findings

This article critically examines recent RL-for-LLM studies, which suggest that reinforcement learning improves search efficiency but does not extend the intrinsic reasoning capabilities of the base model. It also explores the underlying model-conditioned optimization bias, comparisons with SFT distillation, and the trade-off with catastrophic forgetting.
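
The "search efficiency vs. intrinsic capability" argument in this line of work is typically made with pass@k: RL-tuned models win at pass@1, but the base model often catches up or overtakes at large k. Below is the standard unbiased pass@k estimator; its use as these studies' exact metric is an assumption.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k given n samples of which c are correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g., base model: 20/256 correct; RL-tuned model: 60/256 correct.
# The RL model dominates at k=1, but both saturate as k grows.
print(pass_at_k(256, 20, 1), pass_at_k(256, 20, 128))
print(pass_at_k(256, 60, 1), pass_at_k(256, 60, 128))
```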

Catastrophic Forgetting · LLM · Model Optimization
11 min read
Baobao Algorithm Notes
Nov 20, 2025 · Artificial Intelligence

Why Reinforcement Learning Preserves LLM Generality Better Than Supervised Fine‑Tuning

The article analyzes why reinforcement learning (RL) fine-tuning retains a large language model's general abilities better than supervised fine-tuning (SFT), explaining SFT's off-policy distribution shift and the on-policy data consistency, KL-penalty, and trust-region mechanisms that give RL its anti-forgetting properties.
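
Of these mechanisms, the KL penalty is the easiest to make concrete: the per-token reward is shifted by the divergence from a frozen reference policy, pulling the fine-tuned model back toward its pre-trained distribution. A minimal sketch, with an illustrative coefficient and the simple log-ratio KL estimator:

```python
import torch

def kl_shaped_rewards(rewards: torch.Tensor,
                      policy_logprobs: torch.Tensor,
                      ref_logprobs: torch.Tensor,
                      beta: float = 0.05) -> torch.Tensor:
    """Per-token reward_t - beta * (log pi(a_t) - log pi_ref(a_t))."""
    kl_per_token = policy_logprobs - ref_logprobs  # simple log-ratio estimator
    return rewards - beta * kl_per_token

# Toy usage on log-probs of the sampled tokens in one trajectory.
rewards = torch.zeros(8); rewards[-1] = 1.0        # sparse terminal task reward
pi_lp  = torch.full((8,), -1.0)                    # policy log-probs
ref_lp = torch.full((8,), -1.2)                    # frozen reference log-probs
print(kl_shaped_rewards(rewards, pi_lp, ref_lp))
```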

Catastrophic Forgetting · LLM · On-Policy Data
8 min read
AsiaInfo Technology: New Tech Exploration
Feb 24, 2025 · Artificial Intelligence

Can Multi‑Teacher Distillation Overcome Catastrophic Forgetting in Continual Learning?

This paper proposes a multi-teacher distillation framework for continual learning that combines active data rehearsal with feature-decoupled distillation, demonstrating superior performance on the PASCAL VOC and COCO benchmarks while mitigating catastrophic forgetting and balancing the stability-plasticity trade-off.
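
A minimal sketch of the multi-teacher idea, assuming one frozen teacher per previous task and a plain feature-matching loss; the paper's feature-decoupling and active-rehearsal details are not reproduced here:

```python
import torch
import torch.nn.functional as F

def multi_teacher_distill_loss(student_feats: torch.Tensor,
                               teacher_feats: list[torch.Tensor],
                               weights: list[float]) -> torch.Tensor:
    """Weighted feature-matching loss against each frozen teacher."""
    loss = torch.zeros(())
    for w, t in zip(weights, teacher_feats):
        loss = loss + w * F.mse_loss(student_feats, t.detach())
    return loss

# Toy usage: two teachers; the older task is weighted higher for stability.
student = torch.randn(4, 256, requires_grad=True)
teachers = [torch.randn(4, 256), torch.randn(4, 256)]
loss = multi_teacher_distill_loss(student, teachers, weights=[0.7, 0.3])
loss.backward()  # gradients flow only into the student features
```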

AI · Catastrophic Forgetting · Continual Learning
12 min read
Baobao Algorithm Notes
Oct 25, 2024 · Artificial Intelligence

How to Use Importance Sampling for Effective Continued Pretraining of LLMs

Continued pretraining (CP) bridges pretraining and SFT to inject domain knowledge, but it suffers from catastrophic forgetting. This article explores importance sampling to balance general and domain data, and discusses data selection, annealing strategies, and practical tips for mitigating forgetting while enhancing specialized capabilities.
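
A minimal sketch of the data-mixing idea: sample domain examples in proportion to an importance score while reserving a floor of general-domain data to limit forgetting. The scoring function and mixture ratio below are illustrative assumptions, not the article's exact recipe.

```python
import random

def sample_batch(domain_pool: list[str], general_pool: list[str],
                 domain_scores: list[float], batch_size: int,
                 general_ratio: float = 0.3) -> list[str]:
    """Importance-weighted domain sampling with a general-data floor."""
    n_general = int(batch_size * general_ratio)   # replayed general data
    n_domain = batch_size - n_general
    batch = random.choices(domain_pool, weights=domain_scores, k=n_domain)
    batch += random.choices(general_pool, k=n_general)
    random.shuffle(batch)
    return batch

# Toy usage: three domain shards scored by estimated importance.
batch = sample_batch(["d1", "d2", "d3"], ["g1", "g2"],
                     domain_scores=[0.5, 0.3, 0.2], batch_size=10)
```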

Catastrophic Forgetting · Continued Pretraining · Domain Adaptation
8 min read