Privacy Preserving Properties of Vision Classifiers

📅 2025-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the privacy robustness of vision classifiers, including MLPs, CNNs, and Vision Transformers (ViTs), against gradient inversion attacks (i.e., training-data reconstruction) in model-sharing scenarios. We propose the first cross-architecture benchmarking framework for privacy vulnerability, integrating gradient-based inversion reconstruction, multi-model risk quantification, and measurement of sensitive-information leakage intensity. Our analysis identifies input representation, feature extraction mechanisms, and weight structure as the key determinants of inversion robustness. Empirically, ViTs resist inversion significantly better than CNNs and MLPs, reducing reconstruction fidelity by up to 2.3×. Crucially, we establish the first performance–privacy trade-off benchmark, giving empirical grounding to privacy-aware model selection and architectural design. The findings offer actionable guidance for deploying vision models under privacy-sensitive constraints.
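
To make the attack setting concrete, below is a minimal PyTorch sketch of a gradient-matching inversion in the style of Deep Leakage from Gradients: a dummy input and soft label are optimized until the gradients they induce match the observed ones. This is an illustrative sketch of the attack family the summary describes, not the paper's implementation; the function name, `true_grads` argument, and LBFGS settings are our own assumptions.

```python
import torch
import torch.nn as nn

def gradient_inversion(model, true_grads, input_shape, num_classes,
                       steps=300, lr=0.1):
    """Hypothetical gradient-matching inversion (DLG-style sketch):
    optimize a dummy example so its gradients match observed ones."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    for _ in range(steps):
        def closure():
            opt.zero_grad()
            pred = model(dummy_x)
            # cross-entropy against the (soft) dummy label
            loss = torch.mean(torch.sum(
                -dummy_y.softmax(dim=-1) * torch.log_softmax(pred, dim=-1),
                dim=-1))
            grads = torch.autograd.grad(loss, model.parameters(),
                                        create_graph=True)
            # squared distance between candidate and observed gradients
            grad_diff = sum(((g - t) ** 2).sum()
                            for g, t in zip(grads, true_grads))
            grad_diff.backward()
            return grad_diff
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

The same driver can be run unchanged against an MLP, a CNN, or a ViT, which is what makes a cross-architecture comparison of this kind possible.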

📝 Abstract
Vision classifiers are often trained on proprietary datasets containing sensitive information, yet the models themselves are frequently shared openly under the assumption that they preserve the privacy of their training data. The extent to which this assumption holds across architectures, however, remains unexplored, and it is directly challenged by inversion attacks that attempt to reconstruct training data from model weights, exposing significant privacy vulnerabilities. In this study, we systematically evaluate the privacy-preserving properties of vision classifiers across diverse architectures, including Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Vision Transformers (ViTs). Using network inversion-based reconstruction techniques, we assess the extent to which these architectures memorize and reveal training data, quantifying the relative ease of reconstruction across models. Our analysis highlights how architectural differences, such as input representation, feature extraction mechanisms, and weight structures, influence privacy risk. By comparing these architectures, we identify which are more resilient to inversion attacks and examine the trade-offs between model performance and privacy preservation. Our findings provide actionable insights for the design of secure, privacy-aware machine learning systems, emphasizing the importance of architectural decisions in sensitive applications involving proprietary or personal data.
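
As a hedged illustration of the quantification step the abstract describes, the snippet below scores reconstructions against the original training images and ranks architectures by inversion resistance. The metric choice (PSNR) and all function names are our assumptions for this sketch, not the paper's protocol.

```python
import torch

def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio between image batches; lower PSNR
    means a weaker reconstruction, i.e. better privacy preservation."""
    mse = torch.mean((original - reconstructed) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

def rank_by_privacy(reconstructions):
    """reconstructions: {arch_name: (original_batch, reconstructed_batch)}.
    Returns architecture names sorted most-resistant first."""
    scores = {name: psnr(orig, rec).item()
              for name, (orig, rec) in reconstructions.items()}
    return sorted(scores, key=scores.get)  # ascending PSNR
```

Under this kind of scoring, a finding such as "ViTs reduce reconstruction fidelity by up to 2.3×" corresponds to ViTs sorting ahead of CNNs and MLPs in the ranking.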
Problem

Research questions and friction points this paper is trying to address.

Visual Classifier Security
Privacy Protection
Model Inversion Attack
Innovation

Methods, ideas, or system contributions that make the work stand out.

Privacy Preservation
Visual Classification Models
Security-Performance Trade-off