🤖 AI Summary
This work addresses the challenge of safely deploying large language models by enabling selective forgetting of harmful or undesirable data while preserving general capabilities. The authors propose an efficient unlearning method that performs a small number of uphill Gauss-Newton steps on the forget set, leveraging K-FAC to approximate the Hessian for low-cost second-order updates. A key innovation lies in mapping output-space constraints from a retain set into weight space, allowing more precise preservation of desired model behavior and enabling reuse of unlearning updates. Evaluated on the WMDP and ToFU benchmarks, the approach substantially suppresses outputs associated with the forget set—approaching the efficacy of full retraining—while inducing significantly less performance degradation on the retain set compared to existing methods.
📝 Abstract
Standard large language model training can create models that produce outputs their trainer deems unacceptable in deployment. The probability of these outputs can be reduced using methods such as LLM unlearning. However, unlearning a set of data (called the forget set) can degrade model performance on other distributions where the trainer wants to retain the model's behavior. To improve this trade-off, we demonstrate that using the forget set to compute only a few uphill Gauss-Newton steps provides a conceptually simple, state-of-the-art unlearning approach for LLMs. While Gauss-Newton steps adapt Newton's method to non-linear models, it is non-trivial to efficiently and accurately compute such steps for LLMs. Hence, our approach crucially relies on parametric Hessian approximations such as Kronecker-Factored Approximate Curvature (K-FAC). We call this combined approach K-FADE (K-FAC for Distribution Erasure). Our evaluation on the WMDP and ToFU benchmarks demonstrates that K-FADE suppresses outputs from the forget set and approximates, in output space, the results of retraining without the forget set. Critically, our method does this while altering the outputs on the retain set less than previous methods. This is because K-FADE transforms a constraint on the model's outputs across the entire retain set into a constraint on the model's weights, allowing the algorithm to minimally change the model's behavior on the retain set at each step. Moreover, the unlearning updates computed by K-FADE can be reapplied later if the model undergoes further training, allowing unlearning to be cheaply maintained.
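To make the core update concrete, here is a minimal NumPy sketch of one "uphill" curvature-preconditioned step of the kind the abstract describes. This is an illustrative reconstruction, not the authors' implementation: the function name `kfac_uphill_step`, the shapes, and the damping scheme are assumptions. It uses the standard K-FAC form of the layer curvature, F ≈ A ⊗ G (input second-moment factor A, output-gradient second-moment factor G), and ascends the forget-set loss with the preconditioned gradient G⁻¹ ∇L A⁻¹.

```python
import numpy as np

def kfac_uphill_step(W, grad_forget, A, G, lr=1e-2, damping=1e-3):
    """One ascent step on the forget-set loss, preconditioned by a
    Kronecker-factored curvature approximation F ~ A (x) G (K-FAC style).

    W           : (out, in)  layer weight matrix
    grad_forget : (out, in)  gradient of the forget-set loss w.r.t. W
    A           : (in, in)   input (activation) second-moment factor
    G           : (out, out) output-gradient second-moment factor
    """
    # Damping keeps the factor inverses well-conditioned.
    A_d = A + damping * np.eye(A.shape[0])
    G_d = G + damping * np.eye(G.shape[0])
    # Preconditioned gradient: G^{-1} @ grad_forget @ A^{-1}
    # (A and G are symmetric, so solving against A_d handles the right factor).
    precond = np.linalg.solve(G_d, np.linalg.solve(A_d, grad_forget.T).T)
    # Uphill: *increase* the loss on the forget set.
    return W + lr * precond

# Toy usage with hypothetical shapes; identity factors and zero damping
# reduce the step to plain gradient ascent.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
g = rng.normal(size=(4, 3))
A = np.eye(3)
G = np.eye(4)
W_new = kfac_uphill_step(W, g, A, G, lr=0.1, damping=0.0)
```

In the paper's full method the retain-set constraint additionally restricts which weight directions this step may move along; that projection is omitted here for brevity.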