🤖 AI Summary
Existing tools lack systematic, empirical capabilities for evaluating the differential privacy (DP) guarantees of machine learning models, particularly in privacy-sensitive applications. Method: We propose the first modular, empirically grounded DP auditing framework, integrating state-of-the-art inference attacks (membership inference, model extraction, and data reconstruction) alongside multiple empirical DP measurement techniques, enabling flexible configuration and scalable privacy analysis. Contribution/Results: Its core innovation is a unified architecture that supports plug-and-play integration of diverse attacks and DP estimators, substantially improving assessment efficiency and experimental reproducibility. The open-source implementation fosters community-driven development and validation. The framework establishes a practical, verifiable technical infrastructure for assessing ML models' compliance with DP requirements.
📝 Abstract
The increasing deployment of Machine Learning (ML) models in sensitive domains motivates the need for robust, practical privacy assessment tools. PrivacyGuard is a comprehensive tool for empirical differential privacy (DP) analysis, designed to evaluate privacy risks in ML models through state-of-the-art inference attacks and advanced privacy measurement techniques. To this end, PrivacyGuard implements a diverse suite of privacy attacks -- including membership inference, extraction, and reconstruction attacks -- enabling both off-the-shelf and highly configurable privacy analyses. Its modular architecture allows for the seamless integration of new attacks and privacy metrics, supporting rapid adaptation to emerging research advances. We make PrivacyGuard available at https://github.com/facebookresearch/PrivacyGuard.
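PrivacyGuard's actual interfaces are documented in its repository; as a generic illustration of the kind of analysis such a framework automates, the sketch below implements a simple loss-threshold membership inference attack and converts its true/false positive rates into an empirical lower bound on epsilon. All function names and the toy loss values here are illustrative assumptions, not PrivacyGuard's API.

```python
import math

def threshold_attack(member_losses, nonmember_losses, threshold):
    # Predict "member" whenever the per-example loss falls below the
    # threshold; low loss suggests the model saw the example in training.
    tpr = sum(l < threshold for l in member_losses) / len(member_losses)
    fpr = sum(l < threshold for l in nonmember_losses) / len(nonmember_losses)
    return tpr, fpr

def empirical_epsilon_lower_bound(tpr, fpr):
    # Any (epsilon, 0)-DP training procedure constrains every membership
    # test to tpr <= exp(epsilon) * fpr, so log(tpr / fpr) lower-bounds
    # the epsilon actually achieved.
    if tpr == 0.0:
        return 0.0           # attack never fires; bound is trivial
    if fpr == 0.0:
        return float("inf")  # perfect separation; bound is unbounded
    return math.log(tpr / fpr)

# Toy per-example losses: training members tend to score lower.
member_losses = [0.10, 0.20, 0.15, 0.30]
nonmember_losses = [0.40, 0.90, 1.20, 0.70]

tpr, fpr = threshold_attack(member_losses, nonmember_losses, threshold=0.5)
eps_lb = empirical_epsilon_lower_bound(tpr, fpr)
print(tpr, fpr, round(eps_lb, 3))  # 1.0 0.25 1.386
```

A pluggable framework in this spirit would treat the attack (how scores are produced) and the estimator (how scores become a privacy bound) as independent, swappable components.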