🤖 AI Summary
To address the poor digital-to-physical transfer robustness of optical adversarial attacks on projector-camera face recognition systems, this paper proposes a device-aware optical adversarial attack. It introduces, for the first time, resolution-aware and color-aware device adaptation mechanisms, integrating optical imaging modeling, device-specific calibration, adversarial perturbation optimization, and closed-loop physical validation. The approach significantly mitigates the performance degradation that digital-domain perturbations suffer when physically projected, and supports both white-box and black-box settings. It achieves high physical evasion success rates against mainstream face recognition (FR) models and commercial systems: facial similarity scores drop by only 14% on average from digital to physical attacks, and the attack remains effective against both real faces and photographs, including under liveness detection. This substantially improves the practicality and generalizability of optical adversarial attacks on portable devices.
📝 Abstract
Deep-learning-based face recognition (FR) systems are susceptible to adversarial examples in both the digital and physical domains. Physical attacks pose a greater threat to deployed systems because adversaries can easily access the input channel, allowing them to provide malicious inputs that impersonate a victim. This paper addresses the limitations of existing projector-camera-based adversarial light attacks in practical FR setups. By incorporating device-aware adaptations into the digital attack algorithm, such as resolution-aware and color-aware adjustments, we mitigate the degradation from the digital to the physical domain. Experimental validation demonstrates the efficacy of our proposed algorithm against real and spoof adversaries, achieving high physical similarity scores on FR models and state-of-the-art commercial systems. On average, there is only a 14% reduction in scores from digital to physical attacks, with high attack success rates in both white- and black-box scenarios.
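The abstract's core idea of resolution-aware and color-aware device adaptation can be illustrated with a minimal sketch. The code below is a hypothetical interpretation, not the paper's actual algorithm: it average-pools a digital perturbation down to an assumed effective projector resolution, then maps RGB values through an assumed 3x3 projector color calibration matrix. The function name, the pooling scheme, and the linear color model are all illustrative assumptions.

```python
import numpy as np

def adapt_perturbation(delta, proj_hw, color_matrix):
    """Illustrative device-aware adaptation (assumed, not the paper's method).

    delta        : H x W x 3 digital-domain perturbation in [0, 1]
    proj_hw      : (h, w) assumed effective projector resolution; h, w must
                   divide H, W for this simple block-pooling sketch
    color_matrix : assumed 3 x 3 projector color calibration matrix,
                   e.g. fitted from camera measurements of projected patches
    """
    H, W, _ = delta.shape
    h, w = proj_hw
    # Resolution-aware step: block-average so each projector pixel carries
    # the mean of the finer digital pixels it would have to reproduce.
    pooled = delta.reshape(h, H // h, w, W // w, 3).mean(axis=(1, 3))
    # Color-aware step: map digital RGB through the calibrated linear
    # projector response, then clamp to the displayable range.
    adapted = pooled @ color_matrix.T
    return np.clip(adapted, 0.0, 1.0)

# Example: a 128x128 perturbation adapted to a 32x32 effective resolution
# with an identity color calibration (i.e., a perfectly neutral projector).
delta = np.random.rand(128, 128, 3) * 0.1
adapted = adapt_perturbation(delta, (32, 32), np.eye(3))
```

In a real pipeline, this adaptation would sit inside the attack optimization loop, so the perturbation is optimized directly in the device-constrained space rather than being degraded after the fact.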