🤖 AI Summary
This work addresses the challenge of uneven sample-level difficulty in the unlearning process of large language models, which often leads to insufficient removal of targeted knowledge or excessive forgetting of unrelated information. To tackle this issue, the paper introduces distributionally robust optimization (DRO) into unlearning for the first time, proposing a min–max optimization framework: the inner loop constructs the worst-case distribution over the hardest-to-forget samples, while the outer loop updates model parameters under this distribution to achieve balanced forgetting. Two efficient variants are developed: BalDRO-G, a discrete approximation based on GroupDRO, and BalDRO-DV, which uses continuous weighting via the Donsker–Varadhan dual formulation. Experiments on the TOFU and MUSE benchmarks demonstrate that the proposed approach significantly outperforms existing methods, achieving a superior trade-off between effective unlearning and preservation of model utility.
📝 Abstract
As Large Language Models (LLMs) increasingly shape online content, removing targeted information from well-trained LLMs (also known as LLM unlearning) has become critical for web governance. A key challenge lies in sample-wise imbalance within the forget set: different samples exhibit widely varying unlearning difficulty, leading to asynchronous forgetting where some knowledge remains insufficiently erased while other knowledge becomes over-forgotten. To address this, we propose BalDRO, a novel and efficient framework for balanced LLM unlearning. BalDRO formulates unlearning as a min-sup process: an inner step identifies a worst-case data distribution that emphasizes hard-to-unlearn samples, while an outer step updates model parameters under this distribution. We instantiate BalDRO via two efficient variants: BalDRO-G, a discrete GroupDRO-based approximation focusing on high-loss subsets, and BalDRO-DV, a continuous Donsker–Varadhan dual method enabling smooth adaptive weighting within standard training pipelines. Experiments on TOFU and MUSE show that BalDRO significantly improves both forgetting quality and model utility over existing methods, and we release code for reproducibility.
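The contrast between the two variants can be illustrated with a minimal sketch of how the inner step might reweight per-sample forget losses. This is an assumption-laden illustration, not the paper's released code: the function names, the top-k rule for BalDRO-G, and the temperature-controlled softmax for the Donsker–Varadhan dual are our own simplified stand-ins for the described ideas.

```python
import math

def baldro_g_weights(losses, k):
    """Discrete GroupDRO-style approximation (sketch, not the paper's code):
    concentrate the worst-case distribution uniformly on the k samples with
    the highest forget loss, i.e. the hardest-to-unlearn subset."""
    hardest = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)[:k]
    weights = [0.0] * len(losses)
    for i in hardest:
        weights[i] = 1.0 / k
    return weights

def baldro_dv_weights(losses, beta):
    """Continuous Donsker-Varadhan-style dual (sketch): smooth adaptive
    weighting w_i proportional to exp(loss_i / beta). Smaller beta pushes the
    distribution closer to the discrete worst case; larger beta approaches
    uniform weighting. The max is subtracted for numerical stability."""
    m = max(losses)
    exps = [math.exp((l - m) / beta) for l in losses]
    z = sum(exps)
    return [e / z for e in exps]
```

In either case, the outer step would then minimize the weighted forget loss under the returned distribution, so samples that resist unlearning receive proportionally more gradient signal on the next update.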