BalDRO: A Distributionally Robust Optimization based Framework for Large Language Model Unlearning

📅 2026-01-14
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of uneven sample-level difficulty in the unlearning process of large language models, which often leads to insufficient removal of targeted knowledge or excessive forgetting of unrelated information. To tackle this issue, the paper introduces distributionally robust optimization (DRO) into unlearning for the first time, proposing a min–max optimization framework: the inner loop constructs the worst-case distribution over the hardest-to-forget samples, while the outer loop updates model parameters under this distribution to achieve balanced forgetting. Two efficient variants are developed: BalDRO-G, based on a discrete approximation from GroupDRO, and BalDRO-DV, leveraging continuous weighting via the Donsker–Varadhan dual formulation. Experiments on the TOFU and MUSE benchmarks demonstrate that the proposed approach significantly outperforms existing methods, achieving a superior trade-off between effective unlearning and preservation of model utility.
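The inner worst-case step of the continuous (Donsker–Varadhan dual) variant can be sketched as a softmax-style reweighting of per-sample forget losses. This is a minimal illustration under stated assumptions, not the authors' implementation: the temperature `tau` and the loss values are hypothetical.

```python
import numpy as np

def dv_worst_case_weights(losses, tau=1.0):
    """Continuous worst-case weights in the style of the
    Donsker-Varadhan dual: w_i proportional to exp(loss_i / tau),
    normalized to a distribution. Higher-loss (harder-to-forget)
    samples receive larger weight in the outer update."""
    scaled = np.asarray(losses, dtype=float) / tau
    scaled -= scaled.max()          # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()

# Hypothetical per-sample forget losses for one batch
losses = [0.2, 1.5, 0.9, 3.0]
w = dv_worst_case_weights(losses, tau=1.0)
# The hardest sample (loss 3.0) dominates the weighted objective,
# so the weighted loss exceeds the uniform batch average.
weighted_loss = float(np.dot(w, losses))
```

Because the weights are a smooth function of the losses, this reweighting drops into a standard training loop without any discrete selection step, which is the property the summary attributes to BalDRO-DV.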

📝 Abstract
As Large Language Models (LLMs) increasingly shape online content, removing targeted information from well-trained LLMs (also known as LLM unlearning) has become critical for web governance. A key challenge lies in sample-wise imbalance within the forget set: different samples exhibit widely varying unlearning difficulty, leading to asynchronous forgetting where some knowledge remains insufficiently erased while other knowledge is over-forgotten. To address this, we propose BalDRO, a novel and efficient framework for balanced LLM unlearning. BalDRO formulates unlearning as a min-sup process: an inner step identifies a worst-case data distribution that emphasizes hard-to-unlearn samples, while an outer step updates model parameters under this distribution. We instantiate BalDRO via two efficient variants: BalDRO-G, a discrete GroupDRO-based approximation focusing on high-loss subsets, and BalDRO-DV, a continuous Donsker-Varadhan dual method enabling smooth adaptive weighting within standard training pipelines. Experiments on TOFU and MUSE show that BalDRO significantly improves both forgetting quality and model utility over existing methods, and we release code for reproducibility.
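The discrete GroupDRO-style approximation described in the abstract can be illustrated as restricting each update to the highest-loss subset of the forget batch. This is a hedged sketch only: the subset size `k` is an assumed hyperparameter, and the loss values are invented for illustration.

```python
import numpy as np

def groupdro_topk_loss(losses, k=2):
    """Discrete worst-case approximation in the spirit of GroupDRO:
    average the k largest per-sample forget losses, so gradients
    concentrate on the hardest-to-unlearn samples in the batch."""
    losses = np.asarray(losses, dtype=float)
    idx = np.argsort(losses)[-k:]   # indices of the k highest losses
    return float(losses[idx].mean()), idx

# Hypothetical per-sample forget losses for one batch
losses = [0.2, 1.5, 0.9, 3.0]
worst_loss, idx = groupdro_topk_loss(losses, k=2)
# worst_loss = (1.5 + 3.0) / 2 = 2.25
```

Selecting a hard subset gives a piecewise-constant weighting (samples are either in or out), which is the discrete counterpart of the smooth adaptive weighting the DV variant provides.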
Problem

Research questions and friction points this paper is trying to address.

LLM unlearning
sample-wise imbalance
asynchronous forgetting
forget set
unlearning difficulty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributionally Robust Optimization
LLM Unlearning
Min-Sup Optimization
Adaptive Weighting
GroupDRO
👥 Authors

Pengyang Shao
Hefei University of Technology
Recommender Systems · Cognitive Diagnosis

Na Zhai
University of Science and Technology of China, Hefei, Anhui, China

Lei Chen
University of Science and Technology of China, Hefei, Anhui, China

Yonghui Yang
National University of Singapore
Data-centric AI · LLM Safety

Fengbin Zhu
National University of Singapore
NLP · IR · LLM · Document AI · AI + Finance

Xun Yang
University of Science and Technology of China, Hefei, Anhui, China

Meng Wang
Hefei University of Technology, Hefei, Anhui, China