OpenUnlearning: Accelerating LLM Unlearning via Unified Benchmarking of Methods and Metrics

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of reliable evaluation standards for large language model (LLM) unlearning, this paper introduces OpenUnlearning, a standardized, extensible framework for benchmarking both unlearning methods and the metrics used to evaluate them. It uniformly benchmarks nine state-of-the-art unlearning algorithms with 16 diverse evaluations across three leading benchmarks—TOFU, MUSE, and WMDP—and releases over 450 model checkpoints to support analyses of forgetting behavior. The framework also contributes a meta-evaluation benchmark that assesses the faithfulness and robustness of the evaluation metrics themselves, rather than only the methods those metrics score. All code, checkpoints, and evaluation protocols are open-sourced, establishing a community-driven pathway toward rigorous, reproducible LLM unlearning research.

📝 Abstract
Robust unlearning is crucial for safely deploying large language models (LLMs) in environments where data privacy, model safety, and regulatory compliance must be ensured. Yet the task is inherently challenging, partly due to difficulties in reliably measuring whether unlearning has truly occurred. Moreover, fragmentation in current methodologies and inconsistent evaluation metrics hinder comparative analysis and reproducibility. To unify and accelerate research efforts, we introduce OpenUnlearning, a standardized and extensible framework designed explicitly for benchmarking both LLM unlearning methods and metrics. OpenUnlearning integrates 9 unlearning algorithms and 16 diverse evaluations across 3 leading benchmarks (TOFU, MUSE, and WMDP) and also enables analyses of forgetting behaviors across 450+ checkpoints we publicly release. Leveraging OpenUnlearning, we propose a novel meta-evaluation benchmark focused specifically on assessing the faithfulness and robustness of evaluation metrics themselves. We also benchmark diverse unlearning methods and provide a comparative analysis against an extensive evaluation suite. Overall, we establish a clear, community-driven pathway toward rigorous development in LLM unlearning research.
Problem

Research questions and friction points this paper is trying to address.

Ensuring robust unlearning for LLM data privacy and safety
Standardizing fragmented unlearning methods and evaluation metrics
Assessing faithfulness of unlearning metrics via meta-evaluation
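The meta-evaluation idea in the last bullet—scoring the metrics themselves—can be illustrated with a toy sketch. The premise (stated in the abstract) is that a faithful metric should agree with a trusted reference signal about which checkpoints forgot more; one simple way to quantify that agreement is rank correlation. All names, scores, and the choice of Kendall's tau below are illustrative assumptions, not OpenUnlearning's actual protocol.

```python
# Toy meta-evaluation sketch: a faithful unlearning metric should rank
# checkpoints in the same order as a trusted reference signal (e.g.,
# behavior of a model retrained without the forget data).

def kendall_tau(a, b):
    """Kendall rank correlation between two equal-length score lists."""
    assert len(a) == len(b)
    concordant = discordant = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical scores a candidate metric assigns to five unlearned
# checkpoints, and the reference forgetting scores for the same five.
metric_scores = [0.91, 0.45, 0.78, 0.12, 0.60]
reference_scores = [0.88, 0.50, 0.70, 0.20, 0.55]

faithfulness = kendall_tau(metric_scores, reference_scores)
print(f"rank agreement: {faithfulness:.2f}")  # 1.00 here: identical ordering
```

A value near 1 means the metric orders checkpoints exactly as the reference does; values near 0 or below would flag an unfaithful metric.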
Innovation

Methods, ideas, or system contributions that make the work stand out.

Standardized framework for LLM unlearning benchmarking
Integrates 9 algorithms and 16 diverse evaluations
Novel meta-evaluation benchmark for metric robustness
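The "standardized framework" contribution amounts to running every method against every evaluation on one grid, so results are directly comparable. The sketch below shows that harness shape only; the method and metric names, their behaviors, and the model representation are all placeholders, not OpenUnlearning's actual registry or API.

```python
# Toy sketch of a unified benchmarking harness: each unlearning method
# is applied to the same base model, then scored by every metric, so
# all (method, metric) cells are filled on one comparable grid.
# Every name and formula here is a hypothetical stand-in.

def grad_ascent(model):  # placeholder "unlearning method"
    return {**model, "forget_loss": model["forget_loss"] + 2.0}

def npo(model):          # placeholder "unlearning method"
    return {**model, "forget_loss": model["forget_loss"] + 1.5}

def forget_quality(model):  # placeholder metric: higher forget_loss is better
    return min(model["forget_loss"] / 3.0, 1.0)

def utility(model):         # placeholder metric: forgetting costs some utility
    return 1.0 - 0.1 * model["forget_loss"]

METHODS = {"GradAscent": grad_ascent, "NPO": npo}
METRICS = {"forget_quality": forget_quality, "utility": utility}

base_model = {"forget_loss": 0.5}
results = {}
for name, method in METHODS.items():
    unlearned = method(dict(base_model))
    results[name] = {m: metric(unlearned) for m, metric in METRICS.items()}

for name, scores in results.items():
    print(name, {m: round(v, 2) for m, v in scores.items()})
```

Even this toy grid surfaces the tension the paper's evaluation suite is built to expose: the method scoring higher on forgetting scores lower on utility, which is only visible when both metrics are computed under one protocol.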