Task‑Aware Decoding (TaD): A Plug‑and‑Play Method to Mitigate Hallucinations in Large Language Models
This article presents Task-aware Decoding (TaD), a plug-and-play technique introduced by JD Tech and Tsinghua University and accepted at IJCAI 2024. TaD reduces intrinsic hallucinations in large language models by contrasting a model's output distributions before and after fine-tuning, and it remains effective when combined with Retrieval-Augmented Generation (RAG) across a variety of tasks.
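To make the core idea concrete, here is a minimal sketch of what a contrast between pre- and post-fine-tuning outputs could look like at the token-distribution level. This is an illustrative assumption, not the paper's exact formulation: the helper `tad_adjusted_distribution` and the strength parameter `alpha` are hypothetical names, and the adjustment rule (pushing the fine-tuned logits further along the direction they moved away from the pre-trained logits) is one plausible instantiation of the comparison described above.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def tad_adjusted_distribution(logits_ft, logits_pre, alpha=1.0):
    """Sketch of a TaD-style adjustment (hypothetical formulation).

    Shifts the fine-tuned model's logits further in the direction they
    moved away from the pre-trained model's logits, amplifying knowledge
    acquired during fine-tuning. `alpha` is an assumed strength
    hyperparameter, not taken from the paper.
    """
    adjusted = logits_ft + alpha * (logits_ft - logits_pre)
    return softmax(adjusted)

# Toy vocabulary of 4 tokens: fine-tuning raised the score of token 2.
logits_pre = np.array([1.0, 0.5, 0.2, 0.1])
logits_ft  = np.array([1.0, 0.5, 1.5, 0.1])

p_ft  = softmax(logits_ft)
p_tad = tad_adjusted_distribution(logits_ft, logits_pre, alpha=1.0)
# The adjusted distribution favors token 2 even more strongly than
# plain fine-tuned decoding does.
print(p_tad[2] > p_ft[2])
```

With `alpha=0` the sketch falls back to ordinary decoding from the fine-tuned model, which is why this family of contrastive adjustments is naturally plug-and-play: it only post-processes logits and requires no retraining.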