On Conformal Machine Unlearning

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses a core challenge in machine unlearning: how to provide statistically verifiable data removal guarantees without retraining the model. To this end, we propose a novel unlearning framework grounded in conformal prediction. Our method is the first to integrate conformal prediction's uncertainty quantification with machine unlearning, enabling formally verifiable unlearning outcomes. We introduce two new metrics, the Efficiently Covered Frequency (ECF) and the Efficiently Uncovered Frequency (EuCF), to rigorously assess and optimize unlearning performance without requiring retraining of the base model. Extensive experiments across diverse models and datasets demonstrate that our approach significantly improves unlearning effectiveness while preserving predictive accuracy on retained data. The framework is robust, scalable, and computationally efficient, offering a practical yet statistically sound solution for certified data removal.

📝 Abstract
The increasing demand for data privacy, driven by regulations such as GDPR and CCPA, has made Machine Unlearning (MU) essential for removing the influence of specific training samples from machine learning models while preserving performance on retained data. However, most existing MU methods lack rigorous statistical guarantees, rely on heuristic metrics, and often require computationally expensive retraining baselines. To overcome these limitations, we introduce a new definition for MU based on Conformal Prediction (CP), providing statistically sound, uncertainty-aware guarantees without relying on the concept of naive retraining. We formalize conformal criteria that quantify how often forgotten samples are excluded from CP sets, and propose empirical metrics, the Efficiently Covered Frequency (ECF at c) and its complement, the Efficiently Uncovered Frequency (EuCF at d), to measure the effectiveness of unlearning. We further present a practical unlearning method designed to optimize these conformal metrics. Extensive experiments across diverse forgetting scenarios, datasets, and models demonstrate the efficacy of our approach in removing targeted data.
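To make the conformal criteria concrete, here is a minimal sketch of split conformal prediction with softmax-based nonconformity scores, plus an exclusion-frequency statistic measuring how often a forget-set sample's true label falls outside the prediction set. This is an illustrative toy, not the paper's ECF/EuCF implementation; the function names, the `1 - p_y(x)` score, and the random toy data are assumptions for demonstration.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def exclusion_frequency(probs, labels, qhat):
    """Fraction of samples whose true label is NOT in the conformal set.

    With score s(x, y) = 1 - p_y(x), label y is in the set iff s(x, y) <= qhat,
    so exclusion means the score exceeds the calibrated threshold.
    """
    scores = 1.0 - probs[np.arange(len(labels)), labels]
    return float(np.mean(scores > qhat))

# Toy calibration data: well-calibrated softmax outputs over 5 classes.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5) * 5, size=200)
cal_labels = rng.integers(0, 5, size=200)
cal_scores = 1.0 - cal_probs[np.arange(200), cal_labels]
qhat = conformal_quantile(cal_scores, alpha=0.1)

# After unlearning, a forget set should show a high exclusion frequency.
forget_probs = rng.dirichlet(np.ones(5), size=50)
forget_labels = rng.integers(0, 5, size=50)
print(exclusion_frequency(forget_probs, forget_labels, qhat))
```

The design intuition matches the abstract: effective unlearning should push forgotten samples' true labels out of the CP sets, raising the exclusion frequency on the forget set while leaving coverage on retained data near the nominal 1 - alpha level.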
Problem

Research questions and friction points this paper is trying to address.

Ensuring data privacy compliance in machine learning models
Providing statistical guarantees for machine unlearning methods
Avoiding computationally expensive retraining in unlearning processes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conformal Prediction for Machine Unlearning
Efficiently Covered Frequency metrics
Optimized conformal unlearning method
Yahya Alkhatib
School of Electrical and Electronic Engineering, Nanyang Technological University
Wee Peng Tay
Nanyang Technological University
information processing, graph signal processing, graph neural networks, robust machine learning