🤖 AI Summary
This work addresses the under-explored problem of sample-level privacy assessment and protection under model inversion (MI) attacks. We propose the Diversity and Distance Composite Score (DDCS), a quantitative metric tailored to single-sample reconstruction quality, and empirically show that most training samples are robust against existing MI attacks. To improve reconstruction generalization, particularly for hard-to-invert samples, we design a transfer-learning-based generative framework that incorporates an entropy loss and natural gradient descent. Experiments demonstrate that our method improves state-of-the-art MI attacks across three key metrics: DDCS, coverage, and FID. Moreover, DDCS enables unsupervised identification of vulnerable samples, offering a fine-grained lens for privacy risk assessment and targeted defense.
📝 Abstract
Model Inversion (MI) attacks, which reconstruct the training data of neural networks, pose significant privacy risks in machine learning. Recent MI attacks have managed to reconstruct realistic label-level private data, such as the general appearance of a target person from all training images labeled with that person's identity. Beyond label-level privacy, in this paper we show that sample-level privacy, i.e., the private information of a single target sample, is also important but under-explored in the MI literature, owing to the limitations of existing evaluation metrics. To address this gap, we introduce a novel metric tailored for training-sample analysis, the Diversity and Distance Composite Score (DDCS), which evaluates the reconstruction fidelity of each training sample across multiple MI attack attributes, thereby enabling more precise sample-level privacy assessment. Viewing results through this new lens, we observe that many training samples remain resilient even against the most advanced MI attacks. We therefore propose a transfer learning framework that augments the generative capability of MI attackers by integrating an entropy loss and natural gradient descent. Extensive experiments verify that our framework improves state-of-the-art MI attacks on various metrics, including DDCS, coverage, and FID. Finally, we demonstrate that DDCS is also useful for MI defense, as it can identify samples susceptible to MI attacks in an unsupervised manner.