Less Noise, Same Certificate: Retain Sensitivity for Unlearning

📅 2026-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the excessive noise introduced by existing certified unlearning methods, which naively adopt the global sensitivity notion from differential privacy and thereby degrade model utility. Recognizing the fundamental difference in objectives between certified unlearning and differential privacy, the authors propose a notion of "retain sensitivity" that precisely captures the worst-case output change induced by a deletion operation while keeping the retained dataset fixed. Leveraging this refined sensitivity measure, they design a new noise calibration mechanism and provide both theoretical and empirical analyses on tasks including minimum spanning tree weight estimation, principal component analysis (PCA), and empirical risk minimization (ERM). Experimental results demonstrate that, under the same certification guarantees, the approach significantly reduces injected noise, enhances model utility, and improves two mainstream certified unlearning algorithms.

📝 Abstract
Certified machine unlearning aims to provably remove the influence of a deletion set $U$ from a model trained on a dataset $S$, by producing an unlearned output that is statistically indistinguishable from retraining on the retain set $R:=S\setminus U$. Many existing certified unlearning methods adapt techniques from Differential Privacy (DP) and add noise calibrated to global sensitivity, i.e., the worst-case output change over all adjacent datasets. We show that this DP-style calibration is often overly conservative for unlearning, based on a key observation: certified unlearning, by definition, does not require protecting the privacy of the retained data $R$. Motivated by this distinction, we define retain sensitivity as the worst-case output change over deletions $U$ while keeping $R$ fixed. While insufficient for DP, retain sensitivity is exactly sufficient for unlearning, allowing for the same certificates with less noise. We validate these reductions in noise theoretically and empirically across several problems, including the weight of minimum spanning trees, PCA, and ERM. Finally, we refine the analysis of two widely used certified unlearning algorithms through the lens of retain sensitivity, leveraging the regularity induced by $R$ to further reduce noise and improve utility.
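The gap between global and retain sensitivity can be illustrated on a toy statistic. The sketch below (not from the paper; the mean-query setup, bounds, and variable names are illustrative assumptions) compares the two sensitivities for the mean of values in $[0,1]$ under a single deletion: global sensitivity must cover the worst case over all datasets ($1/n$), while retain sensitivity fixes the actual retain set $R$ and only lets the deleted point vary, giving $\max(\bar{R}, 1-\bar{R})/n \le 1/n$. A Laplace mechanism calibrated to the smaller quantity injects proportionally less noise for the same certificate.

```python
import numpy as np

def global_sensitivity_mean(n):
    # Worst-case |mean(S) - mean(R)| over ALL size-n datasets S with
    # values in [0, 1] and one deletion: since mean(S) - mean(R)
    # = (x - mean(R)) / n, this is bounded by 1/n.
    return 1.0 / n

def retain_sensitivity_mean(R, n):
    # Retain-style sensitivity: R is FIXED, only the deleted point
    # x in [0, 1] varies, so the worst case is max over x of
    # |x - mean(R)| / n = max(mean(R), 1 - mean(R)) / n.
    m = float(np.mean(R))
    return max(m, 1.0 - m) / n

rng = np.random.default_rng(0)
R = rng.uniform(0.3, 0.7, size=99)  # hypothetical retain set in [0, 1]
n = len(R) + 1                      # original dataset size (one deletion)
eps = 1.0                           # indistinguishability parameter

g = global_sensitivity_mean(n)
r = retain_sensitivity_mean(R, n)

# Laplace noise scale b = sensitivity / eps: a smaller sensitivity
# yields a tighter noise distribution under the same certificate.
print(f"global noise scale: {g / eps:.4f}")
print(f"retain noise scale: {r / eps:.4f}")
assert r <= g  # retain sensitivity never exceeds global sensitivity
```

Since $R$ here is concentrated around $0.5$, the retain-calibrated scale is roughly half the global one; the paper's contribution is showing that this kind of data-dependent reduction still suffices for unlearning certificates, unlike in DP where the fixed $R$ itself would need protection.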
Problem

Research questions and friction points this paper is trying to address.

certified machine unlearning
differential privacy
global sensitivity
retain sensitivity
noise reduction
Innovation

Methods, ideas, or system contributions that make the work stand out.

retain sensitivity
certified machine unlearning
noise reduction
differential privacy
model unlearning