Label Leakage Attacks in Machine Unlearning: A Parameter and Inversion-Based Approach

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical privacy vulnerability in machine unlearning: the potential leakage of class labels associated with forgotten data. The study systematically uncovers this risk and proposes a novel multi-perspective attack framework that integrates model parameter analysis with model inversion. On the parameter side, it constructs discriminative features using dot products and vector differences, then employs k-means clustering, the Youden index, and decision trees to identify the forgotten class. On the inversion side, it designs a gradient-based white-box attack and a genetic algorithm-driven black-box attack, complemented by thresholding and information entropy criteria to analyze prediction distributions. Experiments across four benchmark datasets demonstrate significant label leakage across five state-of-the-art unlearning methods, revealing a fundamental privacy shortcoming in current approaches.
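The parameter-side attack can be sketched as follows. The feature is the per-class dot product between the target model's final-layer weight rows and those of an auxiliary model; the class whose row changed most after unlearning falls into its own cluster. The function name, weight shapes, and the minimal 1-D k-means below are illustrative assumptions (the paper's full pipeline also uses Youden's index and decision trees), not the authors' code:

```python
import numpy as np

def forgotten_class_from_params(w_target, w_aux, iters=20):
    """Infer the forgotten class from final-layer weights.

    w_target, w_aux: (num_classes, dim) classifier weight matrices of the
    unlearned target model and an auxiliary model. Illustrative sketch only.
    """
    # Per-class dot-product features: a retained class's row stays similar
    # to the auxiliary row; the forgotten class's row drifts away.
    feats = np.sum(w_target * w_aux, axis=1)

    # Minimal 1-D k-means with k=2 (stand-in for the paper's k-means step).
    centers = np.array([feats.min(), feats.max()])
    for _ in range(iters):
        assign = np.abs(feats[:, None] - centers[None, :]).argmin(axis=1)
        for j in (0, 1):
            if np.any(assign == j):
                centers[j] = feats[assign == j].mean()

    # The smaller (outlier) cluster is taken to hold the forgotten class.
    small = 0 if (assign == 0).sum() <= (assign == 1).sum() else 1
    return np.flatnonzero(assign == small)
```

In this toy setting the forgotten class's weight row is the one whose similarity to the auxiliary model collapses, so the two-cluster split isolates it directly.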
📝 Abstract
With the widespread application of artificial intelligence in face recognition and other fields, data privacy has received extensive attention, especially the "right to be forgotten" emphasized by numerous privacy protection laws. Existing unlearning methods, however, may inadvertently leak the categories of unlearned data. This paper focuses on the category unlearning scenario, analyzes the potential category leakage of unlearned data across multiple settings, and proposes four attack methods, from the perspectives of model parameters and model inversion, for attackers with different knowledge backgrounds. At the parameter level, we construct discriminative features by computing dot products or vector differences between the parameters of the target model and those of auxiliary models trained on subsets of retained data and unrelated data, respectively. These features are then processed with k-means clustering, Youden's index, and decision trees to identify the forgotten class. In the model inversion domain, we design a gradient-optimization-based white-box attack and a genetic-algorithm-based black-box attack to reconstruct class-prototypical samples. The prediction profiles of these synthesized samples are then analyzed using a threshold criterion and an information-entropy criterion to infer the forgotten class. We evaluate the proposed attacks on four standard datasets against five state-of-the-art unlearning algorithms, with a detailed analysis of each method's strengths and limitations. Experimental results demonstrate that our approach effectively infers the classes forgotten by the target model.
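The inversion-side entropy criterion can be sketched like this: after reconstructing one class-prototypical sample per class, the unlearned model should give a diffuse (high-entropy) prediction on the sample built for the forgotten class and confident (low-entropy) predictions on the rest. The function names and the softmax-output format are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a prediction distribution."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def infer_forgotten_class(pred_per_class):
    """pred_per_class[c]: the model's softmax output on a sample
    reconstructed (via inversion) to be prototypical of class c.
    Returns the index with the most diffuse prediction."""
    ents = [entropy(p) for p in pred_per_class]
    return int(np.argmax(ents)), ents
```

A threshold criterion, as the abstract also mentions, would instead flag any class whose entropy exceeds a calibrated cutoff rather than taking the argmax.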
Problem

Research questions and friction points this paper is trying to address.

label leakage
machine unlearning
privacy
right to be forgotten
category inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Label Leakage
Machine Unlearning
Model Inversion
Parameter Analysis
Privacy Attack
Weidong Zheng
School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510335, China
Kongyang Chen
The Hong Kong Polytechnic University
AI Security and Privacy · Edge Computing · Internet of Things · Mobile Computing
Yao Huang
Institute of Artificial Intelligence, Beihang University
Trustworthy ML · Multimodal Learning
Yuanwei Guo
Guangzhou Institute of Internet of Things, Guangzhou 511462, China
Yatie Xiao
School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou 510335, China