PrivacyGuard: A Modular Framework for Privacy Auditing in Machine Learning

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tools lack systematic, empirical evaluation capabilities for differential privacy (DP) guarantees of machine learning models, particularly in privacy-sensitive applications. Method: We propose the first modular, empirically grounded DP auditing framework, integrating state-of-the-art inference attacks—including membership inference, model extraction, and data reconstruction—alongside multiple empirical DP measurement techniques, enabling flexible configuration and scalable privacy analysis. Contribution/Results: Its core innovation is a unified architecture supporting plug-and-play integration of diverse attacks and DP estimators, substantially improving assessment efficiency and experimental reproducibility. The open-source implementation fosters community-driven development and validation. This framework establishes a practical, verifiable technical infrastructure for assessing ML model compliance with DP requirements.
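The plug-and-play architecture described above can be sketched as an attack registry: each attack implements a common interface and registers itself under a name, so new attacks and DP estimators can be swapped in without changing the harness. This is a minimal illustrative sketch; the class and registry names are assumptions, not PrivacyGuard's actual API.

```python
from abc import ABC, abstractmethod

# Hypothetical plug-and-play attack interface (illustrative names only).
class PrivacyAttack(ABC):
    @abstractmethod
    def run(self, model, data):
        """Return per-example attack scores (higher = more at risk)."""

ATTACK_REGISTRY = {}

def register_attack(name):
    """Class decorator that registers an attack under a string key."""
    def wrap(cls):
        ATTACK_REGISTRY[name] = cls
        return cls
    return wrap

@register_attack("loss_membership_inference")
class LossMembershipInference(PrivacyAttack):
    def run(self, model, data):
        # Score each example by negative loss: training members tend
        # to have low loss, so a high score flags likely membership.
        return [-model(x) for x in data]

# A harness can now look attacks up by name and run them uniformly.
attack = ATTACK_REGISTRY["loss_membership_inference"]()
scores = attack.run(lambda x: (x - 1.0) ** 2, [0.5, 1.0, 2.0])
```

New attacks only need to subclass `PrivacyAttack` and register themselves, which is one common way such frameworks keep attack code decoupled from the evaluation loop.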

📝 Abstract
The increasing deployment of Machine Learning (ML) models in sensitive domains motivates the need for robust, practical privacy assessment tools. PrivacyGuard is a comprehensive tool for empirical differential privacy (DP) analysis, designed to evaluate privacy risks in ML models through state-of-the-art inference attacks and advanced privacy measurement techniques. To this end, PrivacyGuard implements a diverse suite of privacy attacks -- including membership inference, extraction, and reconstruction attacks -- enabling both off-the-shelf and highly configurable privacy analyses. Its modular architecture allows for the seamless integration of new attacks and privacy metrics, supporting rapid adaptation to emerging research advances. We make PrivacyGuard available at https://github.com/facebookresearch/PrivacyGuard.
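One standard empirical DP measurement technique of the kind the abstract refers to converts a membership inference attack's true/false positive rates into an epsilon lower bound, via the DP hypothesis-testing inequality TPR ≤ e^ε · FPR + δ. A minimal sketch, assuming pure DP (δ = 0); the function name is illustrative and not from PrivacyGuard:

```python
import math

def empirical_epsilon(tpr, fpr):
    """Lower-bound estimate of epsilon implied by an attack's TPR/FPR,
    using TPR <= e^eps * FPR and (1 - FPR) <= e^eps * (1 - TPR)."""
    # Degenerate rates (0 or 1) make the log-ratios unbounded.
    if min(tpr, fpr) <= 0 or max(tpr, fpr) >= 1:
        return float("inf")
    return max(0.0,
               math.log(tpr / fpr),
               math.log((1 - fpr) / (1 - tpr)))

# An attack with 60% TPR at 20% FPR implies epsilon >= ln(3) ~ 1.10.
eps_hat = empirical_epsilon(0.6, 0.2)
```

Comparing such an empirical estimate against a model's claimed theoretical epsilon is what lets a framework like this audit DP compliance rather than take the stated guarantee on faith.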
Problem

Research questions and friction points this paper is trying to address.

Assessing privacy risks in machine learning models
Implementing differential privacy analysis through inference attacks
Providing a modular framework for customizable privacy auditing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework for privacy auditing
Implements a diverse suite of inference attacks
Supports integration of new privacy metrics
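The membership inference attacks listed above are, in their simplest form, a loss threshold: examples whose loss falls below a cutoff are guessed to be training members. A minimal sketch of that baseline (in the style of Yeom et al.); the helper names are illustrative, not PrivacyGuard's API:

```python
def membership_guesses(losses, threshold):
    """Guess 'member' for every example whose loss is below the threshold."""
    return [loss < threshold for loss in losses]

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of the threshold attack on labeled loss samples."""
    guesses = membership_guesses(member_losses + nonmember_losses, threshold)
    labels = [True] * len(member_losses) + [False] * len(nonmember_losses)
    correct = sum(g == y for g, y in zip(guesses, labels))
    return correct / len(labels)

# Members typically exhibit lower loss than held-out non-members,
# so a well-placed threshold separates the two groups.
acc = attack_accuracy([0.1, 0.2, 0.3], [0.8, 0.9, 1.2], threshold=0.5)
```

Accuracy well above 0.5 on held-out splits signals memorization, which is the empirical quantity such audits translate into a privacy-risk estimate.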