Certified Unlearning for Neural Networks

📅 2025-06-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the privacy-compliance requirement of the “right to be forgotten” by studying *certified unlearning*—the verifiable removal of a specific training sample’s influence from a machine learning model upon user request. To overcome limitations of existing approaches—such as reliance on strong assumptions or lack of formal guarantees—we establish, for the first time, a theoretical connection between unlearning operations and privacy amplification under randomized post-processing. We propose the first certified unlearning framework applicable to general neural networks without assumptions on the loss function. Our method employs noisy fine-tuning as a randomized post-processing step, leveraging differential privacy analysis and generalization error bounds to derive rigorous, provable unlearning guarantees. Evaluated on multiple benchmarks, our approach achieves certified unlearning while significantly outperforming existing baselines in model accuracy—demonstrating both theoretical soundness and practical efficacy.

📝 Abstract
We address the problem of machine unlearning, where the goal is to remove the influence of specific training data from a model upon request, motivated by privacy concerns and regulatory requirements such as the "right to be forgotten." Unfortunately, existing methods rely on restrictive assumptions or lack formal guarantees. To this end, we propose a novel method for certified machine unlearning, leveraging the connection between unlearning and privacy amplification by stochastic post-processing. Our method uses noisy fine-tuning on the retain data, i.e., data that does not need to be removed, to ensure provable unlearning guarantees. This approach requires no assumptions about the underlying loss function, making it broadly applicable across diverse settings. We analyze the theoretical trade-offs in efficiency and accuracy and demonstrate empirically that our method not only achieves formal unlearning guarantees but also performs effectively in practice, outperforming existing baselines. Our code is available at https://github.com/stair-lab/certified-unlearningneural-networks-icml-2025
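The core mechanism the abstract describes, noisy fine-tuning on the retain set, can be sketched in a few lines. The function name, the squared-loss linear model, and all hyperparameters below are illustrative assumptions, not the paper's exact algorithm: the point is only that each fine-tuning step on the retain data injects Gaussian noise, which acts as the randomized post-processing that makes the unlearning guarantee provable.

```python
import numpy as np

def noisy_fine_tune(w, X_retain, y_retain, sigma=0.1, lr=0.05, steps=100, seed=0):
    """Illustrative sketch of noisy fine-tuning on retain data.

    Uses a linear model with squared loss (an assumption for this sketch).
    Gaussian noise added to every gradient step is the stochastic
    post-processing; sigma trades off unlearning strength vs. accuracy.
    """
    rng = np.random.default_rng(seed)
    w = w.copy()
    n = len(y_retain)
    for _ in range(steps):
        grad = X_retain.T @ (X_retain @ w - y_retain) / n  # full-batch gradient
        w -= lr * (grad + sigma * rng.standard_normal(w.shape))  # noisy step
    return w

# Toy usage: "forget" the last training sample by noisy fine-tuning
# the full-data model on the remaining (retain) samples only.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.standard_normal(50)
w_full = np.linalg.lstsq(X, y, rcond=None)[0]          # trained on all data
w_unlearned = noisy_fine_tune(w_full, X[:-1], y[:-1])  # retain set = all but last
```

In the paper's setting the model is a general neural network rather than a linear predictor, but the structure is the same: start from the already-trained weights, run noisy gradient steps on the retain data, and certify removal from the injected randomness.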
Problem

Research questions and friction points this paper is trying to address.

Removing specific training data influence from neural networks
Providing formal guarantees for machine unlearning methods
Ensuring privacy compliance without restrictive assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Certified unlearning via noisy fine-tuning
Privacy amplification by stochastic post-processing
No assumptions on loss function required
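To make the "privacy amplification" bullet concrete, the standard Gaussian-mechanism bound shows how a noise scale translates into a certified (ε, δ) guarantee. The paper's actual analysis combines differential-privacy arguments with generalization bounds and is more involved; the formula below is only the classic building block, and the function name is a hypothetical label for this sketch.

```python
import math

def gaussian_mechanism_epsilon(sensitivity, sigma, delta=1e-5):
    """Classic Gaussian-mechanism bound: the mechanism is (eps, delta)-DP
    when sigma >= sensitivity * sqrt(2 ln(1.25/delta)) / eps (for eps < 1).
    Solving for eps gives the guarantee certified by a given noise scale.
    """
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / sigma

# More noise yields a smaller epsilon, i.e. a stronger removal guarantee,
# at the cost of accuracy -- the trade-off the paper analyzes.
eps_low_noise = gaussian_mechanism_epsilon(sensitivity=1.0, sigma=5.0)
eps_high_noise = gaussian_mechanism_epsilon(sensitivity=1.0, sigma=20.0)
```

Since ε scales as 1/σ here, quadrupling the noise scale quarters the certified ε, which mirrors the efficiency/accuracy trade-off discussed in the abstract.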
🔎 Similar Papers
2024-08-01 · International Conference on Machine Learning · Citations: 6
2024-10-02 · arXiv.org · Citations: 2