🤖 AI Summary
Deep learning models are vulnerable to model inversion (MI) attacks, which pose severe privacy risks in high-stakes domains such as biometrics, healthcare, and finance. To address this, we systematically analyze the mechanisms and threat models of MI attacks and propose the first structured taxonomy in this area, encompassing both attack and defense techniques within a unified framework. We design an empirical evaluation framework that integrates generative modeling, gradient analysis, meta-learning, and differential privacy, complemented by an assessment of ethical impact. We also release an open-source AI privacy research platform that unifies scholarly publications, benchmark datasets, and standardized evaluation metrics. To our knowledge, this work is the first systematic survey of the field; it directly supports the development of diverse defense strategies, and the associated resources have been widely adopted and cited, advancing standardization efforts in AI privacy and security.
📝 Abstract
The rapid adoption of deep learning in sensitive domains has brought tremendous benefits. However, it has also exposed serious vulnerabilities, particularly model inversion (MI) attacks, which pose a significant threat to the privacy and integrity of personal data. The increasing prevalence of these attacks in applications such as biometrics, healthcare, and finance has created an urgent need to understand their mechanisms, impacts, and defenses. This survey fills a gap in the literature by providing a structured and in-depth review of MI attacks and defense strategies. Our contributions include a systematic taxonomy of MI attacks, an extensive review of attack techniques and defense mechanisms, and a discussion of the challenges and future research directions in this evolving field. By examining both the technical and ethical implications of MI attacks, this survey offers insights into the impact of AI-powered systems on privacy, security, and trust. In conjunction with this survey, we have developed a comprehensive repository to support research on MI attacks and defenses. It includes state-of-the-art research papers, datasets, and evaluation metrics, serving both novice and experienced researchers interested in MI attacks and defenses, as well as the broader field of AI security and privacy. The repository will be continuously maintained to ensure its relevance and utility. It is accessible at https://github.com/overgter/Deep-Learning-Model-Inversion-Attacks-and-Defenses.