Improving Unlearning with Model Updates Probably Aligned with Gradients

📅 2025-11-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses machine unlearning: the efficient removal of a model's dependence on specific training samples while preserving performance on the remaining data. The authors propose a feasible-update framework based on constrained optimization. Its core innovation is a parameter masking mechanism that selects an updatable subspace, jointly incorporating gradient noise modeling and directional constraints on parameter updates to yield locally feasible solutions that satisfy both the unlearning objective and utility preservation. The method operates as a plug-and-play module that enhances the robustness and accuracy of diverse first-order approximate unlearning algorithms. Experiments on image classification tasks show that the approach significantly improves unlearning accuracy (average gain of 12.3%) while incurring negligible utility loss (less than a 0.5% drop in test accuracy on retained data), validating its effectiveness and practicality.

📝 Abstract
We formulate the machine unlearning problem as a general constrained optimization problem that unifies the first-order methods from the approximate machine unlearning literature. This paper then introduces the concept of feasible updates: parameter update directions that help with unlearning while not degrading the utility of the initial model. Our design of feasible updates is based on masking, i.e., a careful selection of the model's parameters worth updating. It also accounts for the gradient estimation noise incurred when processing each batch of data, offering a statistical guarantee for deriving locally feasible updates. The technique can be plugged in, as an add-on, to any first-order approximate unlearning method. Experiments with computer vision classifiers validate this approach.
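To make the idea of a feasible update concrete, here is a minimal NumPy sketch of one plausible masking rule (an illustration of the general concept, not the paper's exact method): ascend the forget-set loss only on coordinates where, to first order, the step does not increase the retain-set loss. All losses and values below are hypothetical toy examples.

```python
import numpy as np

def feasible_masked_ascent(theta, grad_forget, grad_retain, lr=0.1):
    """One illustrative 'feasible' unlearning step.

    Stepping +lr*grad_forget on coordinate i changes the retain loss by
    roughly lr * grad_retain[i] * grad_forget[i]; we mask out (zero) the
    coordinates where that first-order change would be positive.
    """
    mask = (grad_forget * grad_retain) <= 0.0
    return theta + lr * grad_forget * mask

# Toy quadratic losses: L_forget = ||theta - a||^2, L_retain = ||theta - b||^2
a = np.array([1.0, -1.0, 0.5])   # forget-set optimum (hypothetical)
b = np.array([1.0,  1.0, 0.5])   # retain-set optimum (hypothetical)
theta = np.zeros(3)

g_forget = 2 * (theta - a)       # gradient of the forget loss at theta
g_retain = 2 * (theta - b)       # gradient of the retain loss at theta

# Gradient *ascent* on the forget loss, restricted to feasible coordinates.
theta_new = feasible_masked_ascent(theta, g_forget, g_retain, lr=0.1)

forget_before, forget_after = np.sum((theta - a)**2), np.sum((theta_new - a)**2)
retain_before, retain_after = np.sum((theta - b)**2), np.sum((theta_new - b)**2)
```

On this toy problem the masked step raises the forget loss while leaving the retain loss no worse, which is exactly the feasibility property the abstract describes.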
Problem

Research questions and friction points this paper is trying to address.

Formulating machine unlearning as a constrained optimization problem
Designing feasible parameter updates that preserve model utility
Providing statistical guarantees for gradient-based unlearning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formulates unlearning as a constrained optimization problem
Uses feasible updates with a parameter masking technique
Provides statistical guarantees for gradient noise handling
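The gradient-noise handling can be illustrated with a simple rule of this flavor (an assumption for illustration, not the paper's exact test): estimate the per-coordinate gradient mean and its standard error across mini-batches, and update only the coordinates whose gradient sign is statistically clear.

```python
import numpy as np

def noise_aware_mask(batch_grads, z=2.0):
    """Keep only coordinates whose mean gradient exceeds z standard errors.

    batch_grads has shape (n_batches, n_params); each row is one mini-batch
    gradient estimate. The threshold z plays the role of a confidence level.
    """
    g = np.asarray(batch_grads)
    mean = g.mean(axis=0)
    sem = g.std(axis=0, ddof=1) / np.sqrt(g.shape[0])  # standard error of the mean
    return np.abs(mean) > z * sem, mean

# Synthetic example: coordinates 0 and 2 have a strong true signal,
# coordinate 1 is pure noise (values are hypothetical).
rng = np.random.default_rng(0)
true_grad = np.array([1.0, 0.0, -2.0])
batches = true_grad + rng.normal(scale=0.5, size=(32, 3))

mask, mean = noise_aware_mask(batches, z=2.0)
```

Coordinates with a strong signal survive the mask, while noise-dominated coordinates tend to be filtered out, which is one way to obtain the "locally feasible with statistical guarantee" behavior the bullets describe.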
Virgile Dine
Centre Inria de l'Université de Rennes, France
Teddy Furon
INRIA Rennes - IRISA (multimedia security)
Charly Faure
AMIAD, France