How Researchers Made Large Language Models Forget or Amplify Specific Concepts
A new study from Meta and NYU reveals a two‑step technique—SAMD to locate the attention heads tied to a given concept and SAMI to scale their influence up or down—enabling precise, low‑cost editing of transformer models for applications ranging from factual recall to safety control.
