A Sample-Level Evaluation and Generative Framework for Model Inversion Attacks

📅 2025-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of sample-level privacy assessment and protection in model inversion (MI) attacks. We propose DDCS (Diversity-and-Distance Composite Score), the first quantitative metric tailored to single-sample reconstruction quality, and empirically show that most training samples are robust against existing MI attacks. To improve reconstruction generalization, particularly for hard-to-invert samples, we design a transfer-based generative framework incorporating entropy regularization and natural gradient descent. Experiments demonstrate that our method achieves state-of-the-art performance across all three key metrics: DDCS, coverage, and FID. Moreover, it enables unsupervised identification of vulnerable samples, thereby establishing a novel paradigm for fine-grained privacy risk assessment and targeted defense mechanisms.

📝 Abstract
Model Inversion (MI) attacks, which reconstruct the training dataset of neural networks, pose significant privacy concerns in machine learning. Recent MI attacks have managed to reconstruct realistic label-level private data, such as the general appearance of a target person from all training images labeled as that person. Beyond label-level privacy, in this paper we show that sample-level privacy, the private information of a single target sample, is also important but under-explored in the MI literature due to the limitations of existing evaluation metrics. To address this gap, we introduce a novel metric tailored for training-sample analysis, namely, the Diversity and Distance Composite Score (DDCS), which evaluates the reconstruction fidelity of each training sample while accounting for various MI attack attributes. This, in turn, enhances the precision of sample-level privacy assessments. Leveraging DDCS as a new evaluative lens, we observe that many training samples remain resilient against even the most advanced MI attacks. We therefore further propose a transfer learning framework that augments the generative capabilities of MI attackers through the integration of entropy loss and natural gradient descent. Extensive experiments verify the effectiveness of our framework in improving state-of-the-art MI attacks over various metrics including DDCS, coverage, and FID. Finally, we demonstrate that DDCS can also be useful for MI defense by identifying samples susceptible to MI attacks in an unsupervised manner.
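The abstract does not give the DDCS formula, but the idea of a per-sample composite of reconstruction distance and reconstruction diversity can be sketched. The function below is purely illustrative, not the paper's actual metric: the name `ddcs_like_score`, the L2 distances, the squashing functions, and the weight `alpha` are all assumptions for the sake of a minimal example.

```python
import numpy as np

def ddcs_like_score(target, recons, alpha=0.5):
    """Hypothetical diversity-and-distance composite for one training sample.

    A higher score means the sample is more exposed: some reconstruction
    lies close to it (distance term), and the attacker's reconstructions
    are varied rather than collapsed (diversity term).
    """
    target = np.asarray(target, dtype=float)
    recons = np.asarray(recons, dtype=float)

    # Distance term: best (smallest) L2 distance from any reconstruction
    # to the target sample, mapped into (0, 1] so closer => higher.
    dists = np.linalg.norm(recons - target, axis=1)
    dist_term = 1.0 / (1.0 + dists.min())

    # Diversity term: mean pairwise L2 distance among reconstructions,
    # squashed into [0, 1).
    n = len(recons)
    if n > 1:
        pair = np.linalg.norm(recons[:, None, :] - recons[None, :, :], axis=-1)
        diversity = pair.sum() / (n * (n - 1))
        div_term = diversity / (1.0 + diversity)
    else:
        div_term = 0.0

    return alpha * dist_term + (1.0 - alpha) * div_term
```

With `alpha=0.5`, two reconstructions identical to the target give a distance term of 1 and a diversity term of 0, so the score is 0.5; any real composite would need weights and distance functions chosen per the paper's evaluation protocol.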
Problem

Research questions and friction points this paper is trying to address.

Enhances sample-level privacy evaluation
Proposes novel metric for reconstruction fidelity
Improves model inversion attack effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Diversity and Distance Composite Score
Proposes transfer learning framework
Enhances generative capabilities with entropy loss
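The entropy-loss idea listed above can be sketched generically: an MI attack objective that pushes generated samples toward a target class while an entropy bonus discourages the generator from collapsing onto a single mode. This is a hedged illustration in NumPy, not the paper's framework; the function name, the weight `beta`, and the plain cross-entropy term are all assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_regularized_loss(logits, target_class, beta=0.1):
    """Hypothetical MI attack objective: cross-entropy toward the target
    label, minus an entropy bonus that keeps reconstructions diverse."""
    p = softmax(np.asarray(logits, dtype=float))
    # Cross-entropy on the target class for each generated sample.
    ce = -np.log(p[np.arange(len(p)), target_class] + 1e-12).mean()
    # Mean predictive entropy; subtracting it rewards less peaked outputs.
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce - beta * entropy
```

In a real attack this loss would be minimized over the generator's latent inputs (the paper additionally uses natural gradient descent for that optimization, which is omitted here).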
Haoyang Li
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Li Bai
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Qingqing Ye
Assistant Professor, The Hong Kong Polytechnic University
Data privacy and security, adversarial machine learning
Haibo Hu
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Yaxin Xiao
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University
Huadi Zheng
Unknown affiliation
Voice technology, information security
Jianliang Xu
Department of Computer Science, Hong Kong Baptist University