OFMU: Optimization-Driven Framework for Machine Unlearning

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) must efficiently “forget” specific knowledge, such as user data, copyrighted content, or outdated information, to meet compliance, privacy, and security requirements. Method: The paper proposes OFMU, a penalty-based bi-level optimization framework for machine unlearning. Unlike prevailing weighted multi-objective approaches, which suffer from gradient conflict and utility degradation, OFMU explicitly prioritizes forgetting efficacy through an inner loop that maximizes the forgetting loss and an outer loop that minimizes the retention loss. A similarity-aware penalty decorrelates the gradients of the forget and retention objectives, mitigating conflicts between them during optimization. Theoretical analysis establishes convergence guarantees in both convex and non-convex settings. Results: Extensive experiments on vision and language benchmarks show that OFMU achieves better trade-offs between forgetting efficacy and model utility than state-of-the-art unlearning approaches.
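One way to write this hierarchy down explicitly is the sketch below; the notation (θ for model parameters, L_f and L_r for the forgetting and retention losses, λ for the penalty weight, cos(·,·) for cosine similarity) is assumed here for illustration and is not taken from the paper.

```latex
% Inner step: ascend the forgetting loss while penalizing the cosine similarity
% between the forget and retain gradients (the similarity-aware penalty).
\[
\theta^{+} = \arg\max_{\theta}\; \mathcal{L}_{f}(\theta)
  - \lambda\,\cos\!\big(\nabla_{\theta}\mathcal{L}_{f}(\theta),\,
                        \nabla_{\theta}\mathcal{L}_{r}(\theta)\big)
\]
% Outer step: descend the retention loss to restore utility,
% warm-started at the inner solution.
\[
\theta^{\star} = \arg\min_{\theta}\; \mathcal{L}_{r}(\theta)
  \quad \text{(warm-started at } \theta^{+}\text{)}
\]
```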

📝 Abstract
Large language models deployed in sensitive applications increasingly require the ability to unlearn specific knowledge, such as user requests, copyrighted materials, or outdated information, without retraining from scratch to ensure regulatory compliance, user privacy, and safety. This task, known as machine unlearning, aims to remove the influence of targeted data (forgetting) while maintaining performance on the remaining data (retention). A common approach is to formulate this as a multi-objective problem and reduce it to a single-objective problem via scalarization, where forgetting and retention losses are combined using a weighted sum. However, this often results in unstable training dynamics and degraded model utility due to conflicting gradient directions. To address these challenges, we propose OFMU, a penalty-based bi-level optimization framework that explicitly prioritizes forgetting while preserving retention through a hierarchical structure. Our method enforces forgetting via an inner maximization step that incorporates a similarity-aware penalty to decorrelate the gradients of the forget and retention objectives, and restores utility through an outer minimization step. To ensure scalability, we develop a two-loop algorithm with provable convergence guarantees under both convex and non-convex regimes. We further provide a rigorous theoretical analysis of convergence rates and show that our approach achieves better trade-offs between forgetting efficacy and model utility compared to prior methods. Extensive experiments across vision and language benchmarks demonstrate that OFMU consistently outperforms existing unlearning methods in both forgetting efficacy and retained utility.
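As a rough illustration of the two-loop structure described in the abstract, the PyTorch-style sketch below runs a few inner ascent steps on the forgetting loss with a similarity-aware penalty, then one outer descent step on the retention loss. The function names, hyperparameters, and the choice to differentiate through the penalty (a second-order term) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of one OFMU-style unlearning cycle; not the authors' code.
import torch
import torch.nn.functional as F

def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, flattened into one vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def unlearning_step(model, loss_fn, forget_batch, retain_batch,
                    inner_steps=3, inner_lr=1e-4, outer_lr=1e-4, lam=0.1):
    """One inner-maximization / outer-minimization cycle (all names assumed)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Inner loop: gradient *ascent* on the forgetting loss, penalized by the
    # cosine similarity between forget and retain gradients (decorrelation).
    for _ in range(inner_steps):
        forget_loss = loss_fn(model, forget_batch)
        retain_loss = loss_fn(model, retain_batch)
        g_f = flat_grad(forget_loss, params, create_graph=True)
        g_r = flat_grad(retain_loss, params, create_graph=True)
        sim = F.cosine_similarity(g_f, g_r, dim=0)

        inner_obj = forget_loss - lam * sim
        grads = torch.autograd.grad(inner_obj, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p.add_(inner_lr * g)  # ascent: move up the penalized inner objective

    # Outer step: plain gradient descent on the retention loss to restore utility.
    retain_loss = loss_fn(model, retain_batch)
    grads = torch.autograd.grad(retain_loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(outer_lr * g)
    return float(retain_loss.detach())
```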
Problem

Research questions and friction points this paper is trying to address.

Removing the influence of specific data from a trained model without retraining from scratch
Resolving conflicting gradient directions between the forgetting and retention objectives (see the sketch after this list)
Preserving model utility on retained data while forgetting proceeds
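To make the gradient-conflict point concrete, the sketch below measures the cosine similarity between the gradients of the two terms of a weighted-sum (scalarized) unlearning loss; the function name and the baseline form L_retain − β·L_forget are assumptions for illustration, not taken from the paper.

```python
# Hypothetical conflict diagnostic for the scalarized baseline; illustrative only.
import torch
import torch.nn.functional as F

def gradient_conflict(model, loss_fn, forget_batch, retain_batch):
    """Cosine similarity between the gradients of the two terms of a
    scalarized unlearning loss, L_retain - beta * L_forget.
    A value below zero means the terms pull parameters in opposing directions,
    so their weighted sum can be small or unstable even when both terms matter."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_f = torch.autograd.grad(loss_fn(model, forget_batch), params)
    g_r = torch.autograd.grad(loss_fn(model, retain_batch), params)
    g_f = torch.cat([g.reshape(-1) for g in g_f])
    g_r = torch.cat([g.reshape(-1) for g in g_r])
    # Gradient of the -L_forget term is -g_f; compare it with the L_retain term.
    return F.cosine_similarity(-g_f, g_r, dim=0).item()
```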
Innovation

Methods, ideas, or system contributions that make the work stand out.

Penalty-based bi-level optimization framework
Similarity-aware penalty that decorrelates forget and retention gradients
Two-loop algorithm with provable convergence guarantees (toy usage run sketched below)
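As a usage note for the two-loop algorithm listed above, a toy driver could look like the following; it reuses the hypothetical `unlearning_step` from the sketch after the abstract, and the model, data, and epoch count are synthetic placeholders rather than the paper's setup.

```python
# Toy end-to-end run of the two-loop scheme; reuses the hypothetical
# `unlearning_step` defined in the earlier sketch. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

def loss_fn(model, batch):
    x, y = batch
    return F.cross_entropy(model(x), y)

# Synthetic "forget" and "retain" splits.
forget_data = (torch.randn(64, 16), torch.randint(0, 4, (64,)))
retain_data = (torch.randn(256, 16), torch.randint(0, 4, (256,)))

for epoch in range(5):
    retain_loss = unlearning_step(model, loss_fn, forget_data, retain_data)
    with torch.no_grad():
        forget_loss = loss_fn(model, forget_data).item()
    # A rising forget loss alongside a stable retain loss is the trade-off the
    # paper targets; here it is only a toy illustration.
    print(f"epoch {epoch}: retain_loss={retain_loss:.3f} forget_loss={forget_loss:.3f}")
```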