Unlearning Evaluation through Subset Statistical Independence

📅 2026-02-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing machine unlearning evaluation methods rely on model retraining or membership inference attacks, which require access to training configurations or supervision labels and thus have limited applicability in real-world scenarios. This work proposes a lightweight, self-contained, subset-level unlearning evaluation framework based on statistical independence. It introduces the Hilbert–Schmidt Independence Criterion (HSIC) into unlearning assessment for the first time, measuring the statistical dependence between model outputs and the forgotten data subset without retraining, auxiliary models, or prior information. Experimental results demonstrate that the proposed method effectively distinguishes between in-training and out-of-training subsets and accurately reflects the performance of various unlearning algorithms, significantly outperforming existing evaluation approaches.

๐Ÿ“ Abstract
Evaluating machine unlearning remains challenging, as existing methods typically require retraining reference models or performing membership inference attacks, both of which rely on prior access to training configuration or supervision labels, making them impractical in realistic scenarios. Motivated by the fact that most unlearning algorithms remove a small, random subset of the training data, we propose a subset-level evaluation framework based on statistical independence. Specifically, we design a tailored use of the Hilbert-Schmidt Independence Criterion to assess whether the model outputs on a given subset exhibit statistical dependence, without requiring model retraining or auxiliary classifiers. Our method provides a simple, standalone evaluation procedure that aligns with unlearning workflows. Extensive experiments demonstrate that our approach reliably distinguishes in-training from out-of-training subsets and clearly differentiates unlearning effectiveness, even when existing evaluations fall short.
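The abstract describes measuring statistical dependence between model outputs and a data subset via the Hilbert-Schmidt Independence Criterion. The paper's tailored variant is not detailed here, but the standard empirical HSIC estimator it builds on can be sketched as follows; the function names, kernel bandwidth, and use of Gaussian kernels are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of the (biased) empirical HSIC estimator with Gaussian
# kernels. A value near zero suggests statistical independence between the
# two sample sets; larger values indicate dependence. This is a generic
# estimator, not the paper's tailored evaluation procedure.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    # X: (n, dx) samples (e.g. model outputs on a subset),
    # Y: (n, dy) samples (e.g. a representation of the subset itself).
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Illustration: dependent pairs score higher than independent ones.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
Y_dep = X + 0.05 * rng.normal(size=(200, 1))   # strongly dependent on X
Y_ind = rng.normal(size=(200, 1))              # independent of X
print(hsic(X, Y_dep), hsic(X, Y_ind))
```

In the unlearning setting described above, a successfully forgotten subset should behave like the independent case: the unlearned model's outputs on that subset should exhibit low HSIC, comparable to outputs on data never seen in training.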
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
unlearning evaluation
statistical independence
membership inference
model retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

machine unlearning
statistical independence
Hilbert-Schmidt Independence Criterion
subset evaluation
model retraining-free