🤖 AI Summary
Large language models are prone to memorizing sensitive, copyrighted, or harmful content, raising significant privacy and legal concerns. Existing unlearning methods struggle to balance effective removal of such knowledge with preservation of overall model utility. This work proposes Attention Smoothing Unlearning (ASU), a novel framework that formulates unlearning as an attention-based self-distillation optimization process. By increasing the softmax temperature to smooth attention distributions, ASU weakens both lexical and semantic associations, directly erasing the targeted memorized information while maintaining coherent responses to forget prompts. Evaluated on benchmarks including TOFU, MUSE, and WMDP, as well as real-world scenarios, ASU substantially outperforms current approaches, achieving highly effective unlearning with negligible degradation in model performance.
📝 Abstract
Large language models are prone to memorizing sensitive, copyrighted, or hazardous content, posing significant privacy and legal concerns. Retraining from scratch is computationally infeasible, whereas current unlearning methods exhibit unstable trade-offs between forgetting and utility, frequently producing incoherent outputs on forget prompts and failing to generalize due to the persistence of lexical-level and semantic-level associations in attention. We propose Attention Smoothing Unlearning (ASU), a principled framework that casts unlearning as self-distillation from a forget-teacher derived from the model's own attention. By increasing the softmax temperature, ASU flattens attention distributions and directly suppresses the lexical-level and semantic-level associations responsible for reconstructing memorized knowledge. This yields a bounded optimization objective that erases factual information yet maintains coherent responses to forget prompts. Empirical evaluation on TOFU, MUSE, and WMDP, along with real-world and continual unlearning scenarios across question answering and text completion, demonstrates that ASU outperforms baselines in most unlearning scenarios, delivering robust unlearning with minimal loss of model utility.
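The core mechanism described above — raising the softmax temperature to flatten an attention distribution, which then serves as the "forget-teacher" target — can be illustrated with a minimal numpy sketch. This is not ASU's actual training objective; the function names, toy scores, and temperature value are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def smoothed_attention(scores, temperature=1.0):
    # Temperature > 1 flattens the distribution over keys,
    # weakening the sharp token-to-token associations through
    # which memorized content is reconstructed (illustrative).
    return softmax(scores / temperature, axis=-1)

# Toy attention scores for one query position over five keys.
scores = np.array([4.0, 1.0, 0.5, 0.2, 0.1])

sharp = smoothed_attention(scores, temperature=1.0)   # peaked on key 0
smooth = smoothed_attention(scores, temperature=5.0)  # flattened "forget-teacher" target

# Entropy rises as the distribution flattens toward uniform.
def entropy(p):
    return float(-(p * np.log(p)).sum())
```

In a self-distillation setup of this kind, the smoothed distribution would play the teacher role, and the student model's attention (or outputs) would be trained toward it on forget prompts, suppressing the peaked associations while leaving the rest of the model's behavior intact.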