State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey

📅 2024-02-25
🏛️ arXiv.org
📈 Citations: 0 · Influential citations: 0
🤖 AI Summary
Machine learning systems face significant privacy risks—including membership inference and attribute reconstruction—arising from training data leakage. Method: This paper presents a systematic survey of state-of-the-art privacy-preserving machine learning (PPML) techniques, covering both centralized and collaborative learning settings. It introduces a unified, multi-dimensional threat model and defense-layer mapping framework; proposes a quantitative evaluation framework balancing privacy guarantees and model utility; and clarifies the applicability boundaries and trade-offs among differential privacy, secure multi-party computation, homomorphic encryption, trusted execution environments (TEEs), and federated learning. Contribution/Results: Based on analysis of over 120 studies, the work establishes a comprehensive PPML taxonomy and provides quantitative comparisons across communication overhead, accuracy degradation, and security strength. The findings yield a practical, industry-deployable roadmap for privacy hardening of ML systems.
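The privacy–utility trade-off the survey quantifies can be illustrated with the Laplace mechanism, the canonical differential-privacy primitive discussed among the compared techniques. A minimal sketch (the salary data, clipping bounds, and epsilon values are illustrative assumptions, not drawn from the paper): smaller epsilon means stronger privacy but a noisier, less useful answer.

```python
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng):
    """Epsilon-DP mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so the sensitivity of the
    mean over n records is (upper - lower) / n; adding Laplace noise
    with scale sensitivity / epsilon yields epsilon-differential privacy.
    """
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return clipped.mean() + noise

rng = np.random.default_rng(0)
salaries = rng.uniform(30_000, 120_000, size=1_000)  # synthetic records

# Mean absolute error of the private answer at two privacy levels:
# smaller epsilon -> stronger privacy -> larger error (utility cost).
errors = {
    eps: np.mean([abs(dp_mean(salaries, 30_000, 120_000, eps, rng)
                      - salaries.mean())
                  for _ in range(500)])
    for eps in (0.1, 10.0)
}
```

This is the kind of accuracy-degradation measurement the summary's quantitative comparison refers to: the error at epsilon = 0.1 is roughly two orders of magnitude larger than at epsilon = 10.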

📝 Abstract
This paper examines the evolving landscape of machine learning (ML) and its profound impact across various sectors, with a special focus on the emerging field of Privacy-preserving Machine Learning (PPML). As ML applications become increasingly integral to industries like telecommunications, financial technology, and surveillance, they raise significant privacy concerns, necessitating the development of PPML strategies. The paper highlights the unique challenges in safeguarding privacy within ML frameworks, which stem from the diverse capabilities of potential adversaries, including their ability to infer sensitive information from model outputs or training data. We delve into the spectrum of threat models that characterize adversarial intentions, ranging from membership and attribute inference to data reconstruction. The paper emphasizes the importance of maintaining the confidentiality and integrity of training data, outlining current research efforts that focus on refining training data to minimize privacy-sensitive information and enhancing data processing techniques to uphold privacy. Through a comprehensive analysis of privacy leakage risks and countermeasures in both centralized and collaborative learning settings, this paper aims to provide a thorough understanding of effective strategies for protecting ML training data against privacy intrusions. It explores the balance between data privacy and model utility, shedding light on privacy-preserving techniques that leverage cryptographic methods, Differential Privacy, and Trusted Execution Environments. The discussion extends to the application of these techniques in sensitive domains, underscoring the critical role of PPML in ensuring the privacy and security of ML systems.
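The membership-inference threat model named in the abstract can be sketched with the classic loss-threshold attack: because models typically fit their training records more tightly than unseen ones, an adversary who can observe per-example loss predicts "member" when the loss is low. The loss distributions and threshold below are synthetic assumptions for illustration, not the paper's experiments.

```python
import random

random.seed(0)

# Hypothetical per-example losses: training ("member") records tend to
# have lower loss than held-out ("non-member") records.
member_losses = [random.gauss(0.2, 0.1) for _ in range(500)]
nonmember_losses = [random.gauss(1.0, 0.3) for _ in range(500)]

def loss_threshold_attack(loss, threshold=0.6):
    """Predict membership when the model's loss on a record is low."""
    return loss < threshold

correct = (sum(loss_threshold_attack(l) for l in member_losses)
           + sum(not loss_threshold_attack(l) for l in nonmember_losses))
attack_accuracy = correct / 1000
```

An attack accuracy well above the 0.5 random-guess baseline is exactly the leakage signal that defenses such as differential privacy aim to suppress, at the cost of some model utility.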
Problem

Research questions and friction points this paper is trying to address:
- Machine Learning
- Data Privacy
- Information Security

Innovation

Methods, ideas, or system contributions that make the work stand out:
- Privacy-Preserving Machine Learning
- Differential Privacy
- Collaborative Learning Environments
Chaoyu Zhang