🤖 AI Summary
This study addresses the longstanding trade-off between privacy preservation and action recognition performance in video surveillance. We propose a penalty-driven image anonymization method based on a dual-branch deep neural network architecture—comprising a privacy branch and a utility branch. Crucially, we introduce, for the first time, a decoupled feature-level penalty mechanism that operates exclusively on action features, enabling separate optimization of private attribute anonymization and action representation stability. Our approach integrates differentiable feature perturbation, gradient-weighted penalty backpropagation, and formal differential privacy constraints to ensure compliance with GDPR and the EU AI Act. Extensive experiments on multiple benchmark datasets demonstrate that our method incurs less than 2.1% degradation in action recognition accuracy while reducing privacy leakage variance to under 0.8%, significantly outperforming existing state-of-the-art methods.
📝 Abstract
The rapid development of video surveillance systems for object detection, tracking, activity recognition, and anomaly detection has revolutionized our day-to-day lives while setting alarms for privacy concerns. It isn't easy to strike a balance between visual privacy and action recognition performance in most computer vision models. Is it possible to safeguard privacy without sacrificing performance? It poses a formidable challenge, as even minor privacy enhancements can lead to substantial performance degradation. To address this challenge, we propose a privacy-preserving image anonymization technique that optimizes the anonymizer using penalties from the utility branch, ensuring improved action recognition performance while minimally affecting privacy leakage. This approach addresses the trade-off between minimizing privacy leakage and maintaining high action performance. The proposed approach is primarily designed to align with the regulatory standards of the EU AI Act and GDPR, ensuring the protection of personally identifiable information while maintaining action performance. To the best of our knowledge, we are the first to introduce a feature-based penalty scheme that exclusively controls the action features, allowing freedom to anonymize private attributes. Extensive experiments were conducted to validate the effectiveness of the proposed method. The results demonstrate that applying a penalty to anonymizer from utility branch enhances action performance while maintaining nearly consistent privacy leakage across different penalty settings.