PrivacyBench: Privacy Isn't Free in Hybrid Privacy-Preserving Vision Systems

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of synergistic effects among hybrid privacy-preserving techniques in vision systems, which often leads to performance and resource bottlenecks in real-world deployment. To this end, we propose PrivacyBench—a reproducible, automated benchmarking framework that, for the first time, systematically evaluates the privacy-utility-cost trade-offs of combining federated learning (FL), differential privacy (DP), and secure multi-party computation (SMPC) on medical imaging tasks. Using ResNet18 and Vision Transformer (ViT) models under standardized experimental protocols, we find that the FL+DP combination suffers severe convergence failure, causing accuracy to plummet from 98% to 13% while significantly increasing computational cost and energy consumption. In contrast, FL+SMPC incurs only minor overhead and maintains near-baseline performance. These findings underscore that privacy techniques cannot be arbitrarily combined and offer critical guidance for practical deployment.
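To illustrate the mechanism plausibly behind the FL + DP convergence failure described above, here is a minimal sketch of a DP-SGD-style aggregation step: per-sample gradients are clipped and Gaussian noise scaled to the clipping bound is injected. This is the standard recipe for differentially private training, not necessarily the paper's exact configuration; the function name and default parameters are illustrative assumptions.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style aggregation step (illustrative, not PrivacyBench's code):
    clip each per-sample gradient to clip_norm, sum, add Gaussian noise scaled
    to the clipping bound, and average. The injected noise is what can stall
    convergence when the privacy budget is tight."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is proportional to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

With a small batch and a tight clipping bound, the noise term can dominate the averaged gradient signal, which is one intuition for why stacking DP on top of FL can drag accuracy far below baseline.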

📝 Abstract
Privacy-preserving machine learning deployments in sensitive deep learning applications, from medical imaging to autonomous systems, increasingly require combining multiple techniques. Yet practitioners lack systematic guidance for assessing the synergistic and non-additive interactions of these hybrid configurations, relying instead on isolated technique analyses that miss critical system-level interactions. We introduce PrivacyBench, a benchmarking framework that reveals striking failures in privacy technique combinations with severe deployment implications. Through systematic evaluation across ResNet18 and ViT models on medical datasets, we uncover that FL + DP combinations exhibit severe convergence failure, with accuracy dropping from 98% to 13% while compute costs and energy consumption increase substantially. In contrast, FL + SMPC maintains near-baseline performance with modest overhead. Our framework provides the first systematic platform for evaluating privacy-utility-cost trade-offs through automated YAML configuration, resource monitoring, and reproducible experimental protocols. PrivacyBench enables practitioners to identify problematic technique interactions before deployment, moving privacy-preserving computer vision from ad-hoc evaluation toward principled systems design. These findings demonstrate that privacy techniques cannot be composed arbitrarily and provide critical guidance for robust deployment in resource-constrained environments.
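The abstract's "identify problematic technique interactions before deployment" workflow can be sketched as a toy pre-deployment check. In a PrivacyBench-style setup the experiment spec would live in a YAML file; it is modeled here as a Python dict, and every key name, technique label, and flagged combination is a hypothetical assumption rather than the framework's actual schema.

```python
# Hypothetical experiment spec (in a real YAML-driven framework this
# would be loaded from a config file).
EXPERIMENT = {
    "model": "resnet18",          # or "vit"
    "dataset": "medical_imaging",
    "techniques": ["fl", "dp"],   # hybrid privacy stack under test
    "dp": {"noise_multiplier": 1.1, "clip_norm": 1.0},
}

# Combinations flagged as risky, with the reported finding as the warning.
RISKY_COMBOS = {
    frozenset({"fl", "dp"}):
        "FL+DP showed severe convergence failure (accuracy 98% -> 13%)",
}

def check_config(cfg):
    """Return warnings for every flagged combination present in the config."""
    active = frozenset(cfg["techniques"])
    return [msg for combo, msg in RISKY_COMBOS.items() if combo <= active]
```

For example, `check_config(EXPERIMENT)` surfaces the FL+DP warning, while a config listing `["fl", "smpc"]` passes cleanly, mirroring the paper's contrast between the two hybrid stacks.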
Problem

Research questions and friction points this paper is trying to address.

privacy-preserving machine learning
hybrid privacy systems
federated learning
differential privacy
secure multi-party computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PrivacyBench
hybrid privacy-preserving systems
federated learning
differential privacy
secure multi-party computation
Nnaemeka Obiefuna
ML Collective; Friedrich-Alexander-Universität Erlangen-Nürnberg
Samuel Oyeneye
ML Collective
Similoluwa Odunaiya
ML Collective
Iremide Oyelaja
ML Collective
Steven Kolawole
Carnegie Mellon University
ML Efficiency