🤖 AI Summary
This paper addresses *selective forgetting*, the targeted removal of residual influence from sensitive or deprecated data in large language models (LLMs) without full retraining, balancing privacy compliance with preservation of model utility. We propose *Forgetting-MarI*, a marginal-information regularization framework that, to our knowledge, is the first to rigorously formalize and eliminate *only* the marginal information introduced by the data to be forgotten. Grounded in information theory, it provides theoretical guarantees on undetectability of the forgotten data and minimal loss of retained information. Forgetting-MarI employs a mutual-information-based regularization loss, integrated with gradient constraints, data-influence quantification, and layer-wise parameter updates, to enable fine-grained knowledge erasure. Evaluated across multiple benchmarks, it significantly outperforms existing forgetting methods: it eliminates residual influence of the target data while reducing average performance degradation on general-purpose tasks by 42%.
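The objective sketched above, a task loss on retained data plus a regularization term that penalizes information the model still carries about the forget set, can be illustrated with a toy loss function. Everything here is a hedged sketch: the KL-divergence surrogate, the function names, and the `lam` weight are illustrative assumptions, not the paper's actual mutual-information estimator or training procedure.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the correct labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def kl_divergence(p, q):
    # Mean KL(p || q) across examples; zero iff the distributions match.
    return np.sum(p * np.log((p + 1e-12) / (q + 1e-12)), axis=-1).mean()

def unlearning_loss(retain_logits, retain_labels,
                    forget_logits, ref_logits, lam=0.5):
    """Illustrative unlearning objective (NOT the paper's estimator):
    retain-task cross-entropy plus a KL surrogate that penalizes how much
    the model's predictions on the forget set deviate from a reference
    model that never saw that data. The penalty stands in for the
    marginal-information regularizer described in the text."""
    retain_loss = cross_entropy(retain_logits, retain_labels)
    marginal_penalty = kl_divergence(softmax(forget_logits),
                                     softmax(ref_logits))
    return retain_loss + lam * marginal_penalty
```

When the model's forget-set predictions already match the reference, the penalty vanishes and only the retain-task loss remains, which mirrors the intent of removing only the marginal information contributed by the forget data.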
📄 Abstract
As AI models are trained on ever-expanding datasets, the ability to remove the influence of specific data from trained models has become essential for privacy protection and regulatory compliance. Unlearning addresses this challenge by selectively removing parametric knowledge from trained models without retraining from scratch, which is critical for resource-intensive models such as Large Language Models (LLMs). Existing unlearning methods often degrade model performance by removing more information than necessary when attempting to "forget" specific data. We introduce Forgetting-MarI, an LLM unlearning framework that provably removes only the additional (marginal) information contributed by the data to be unlearned, while preserving the information supported by the data to be retained. By penalizing marginal information, our method yields an explicit upper bound on the residual influence of the unlearned data on the trained model, providing provable undetectability. Extensive experiments confirm that our approach outperforms current state-of-the-art unlearning methods, delivering reliable forgetting and better-preserved general model performance across diverse benchmarks. This advancement represents an important step toward making AI systems more controllable and compliant with privacy and copyright regulations without compromising their effectiveness.