DeepHub IMBA
Mar 8, 2026 · Artificial Intelligence

MIT Study: How Self‑Generated History Pollutes LLM Context and Degrades Multi‑Turn Chats

An MIT paper finds that carrying a language model's own prior replies in the dialogue history—a form of context pollution—greatly inflates context length while adding little quality benefit. Trimming this self-generated history yields up to a ten-fold reduction in tokens with comparable responses in about 70% of turns, an effect that is especially pronounced in open-source models.

AI agents · LLM · MIT study