Efficient Utility-Preserving Machine Unlearning with Implicit Gradient Surgery

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental trade-off between forgetting efficacy and model utility in machine unlearning. We propose Implicit Gradient Surgery (IGS), a novel method that formulates unlearning as a utility-constrained optimization problem. By leveraging implicit differentiation to approximate the gradient of the constrained objective, IGS efficiently identifies the optimal forgetting direction with only a single backward pass. This enables fine-grained, controllable parameter updates that precisely erase sensitive memories while preserving overall model performance. We provide theoretical convergence guarantees for IGS under standard smoothness and convexity assumptions. Empirical evaluation across multiple benchmark datasets demonstrates that IGS significantly outperforms state-of-the-art unlearning methods, achieving superior balance among forgetting accuracy, utility retention, and computational efficiency.

📝 Abstract
Machine unlearning (MU) aims to efficiently remove sensitive or harmful memory from a pre-trained model. The key challenge is to balance the potential trade-off between unlearning efficacy and utility preservation, i.e., forgetting specified undesirable information while maintaining the model's original performance. One potential way to tackle this problem is to use multi-objective optimization to jointly optimize both the unlearning and utility-preservation objectives. However, existing multi-objective methods only guarantee finding a Pareto-optimal solution without fine-grained control, which causes under-optimization of the unlearning objective. To this end, we first model MU as a constrained optimization problem, that is, optimizing the unlearning objective under the constraint of a bounded increase in utility loss. We then show that solving this optimization problem is equivalent to unilateral gradient surgery on the unlearning objective. To resolve the additional computational cost brought by gradient surgery, we propose an implicit gradient surgery method, which approximates the solution to the aforementioned constrained optimization problem via only one backpropagation, thereby achieving efficient utility-preserving MU. Theoretically, we provide a tight convergence analysis of the algorithm. Empirically, our extensive experiments show that the proposed algorithm achieves better trade-off results than existing baselines. Codes are available at https://github.com/anseryuer/EUPMU-Efficient-Utility-Preserving-Machine-Unlearning.
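The abstract states that the constrained formulation is equivalent to unilateral gradient surgery on the unlearning objective. As a rough illustration only — not the paper's implicit one-backprop method, which avoids computing both gradients explicitly — the explicit unilateral surgery step can be sketched as a PCGrad-style projection; the function name and the exact projection form here are our assumptions:

```python
import numpy as np

def unilateral_gradient_surgery(g_forget: np.ndarray, g_utility: np.ndarray) -> np.ndarray:
    """Unilateral surgery sketch (assumed form): if the forgetting gradient
    conflicts with the utility gradient (negative inner product), project out
    the conflicting component so the update does not increase utility loss.
    Only g_forget is modified, hence "unilateral"."""
    dot = float(np.dot(g_forget, g_utility))
    if dot < 0:
        # Remove the component of g_forget that opposes g_utility.
        g_forget = g_forget - (dot / np.dot(g_utility, g_utility)) * g_utility
    return g_forget

# Conflicting case: the surgically adjusted gradient is orthogonal to g_utility.
g = unilateral_gradient_surgery(np.array([-1.0, 1.0]), np.array([1.0, 0.0]))
print(g)  # → [0. 1.]
```

The explicit version requires one backward pass per objective; the paper's contribution is approximating this adjusted direction with a single backpropagation.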
Problem

Research questions and friction points this paper is trying to address.

Balancing unlearning efficacy with utility preservation in machine unlearning
Resolving under-optimization in multi-objective machine unlearning methods
Achieving efficient constrained optimization via implicit gradient surgery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constrained optimization for machine unlearning
Implicit gradient surgery via one backpropagation
Balancing unlearning efficacy and utility preservation
Shiji Zhou
Associate Professor, Beihang University
Online Learning, Stochastic Optimization, Multi-Objective Optimization, Multi-task Learning
Tianbai Yu
University of Illinois at Urbana-Champaign
Zhi Zhang
University of Amsterdam
Heng Chang
Tsinghua University
Trustworthy AI, Graph Representation Learning, Data Mining
Xiao Zhou
M.Phil. student at HKUST
Autonomous Driving, DRL
Dong Wu
YanTron Technology Co., Ltd.
Han Zhao
University of Illinois at Urbana-Champaign