Device-aware Optical Adversarial Attack for a Portable Projector-camera System

📅 2025-01-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor digital-to-physical transfer of optical adversarial attacks on projector-camera systems, this paper proposes a device-aware optical adversarial attack method. It introduces, for the first time, resolution-aware and color-aware device adaptation mechanisms, integrating optical imaging modeling, device-specific calibration, adversarial perturbation optimization, and closed-loop physical validation. The approach significantly mitigates the performance degradation that digital-domain perturbations suffer when physically projected, and supports both white-box and black-box settings. It achieves high physical evasion success rates against mainstream face recognition (FR) models and commercial systems: facial similarity scores drop by only 14% on average from digital to physical attacks, and the attack remains effective for both real and spoof (photograph) adversaries. This substantially improves the practicality and generalizability of optical adversarial attacks on portable devices.
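The two device-aware adaptations named in the summary can be sketched as simple image operations: constraining the perturbation to the projector's effective resolution, and pre-compensating colors with a calibrated channel-mixing matrix. This is a minimal, hypothetical illustration; the function names, the nearest-neighbour resampling, and the 3x3 mixing matrix are assumptions for the sketch, not the paper's actual algorithm or calibration values.

```python
import numpy as np

def resolution_aware(perturbation: np.ndarray, proj_hw: tuple) -> np.ndarray:
    """Downsample to the projector grid, then upsample back (nearest-neighbour),
    so the optimized perturbation carries no detail the device cannot render."""
    h, w, _ = perturbation.shape
    ph, pw = proj_hw
    ys = np.arange(ph) * h // ph          # rows sampled at projector resolution
    xs = np.arange(pw) * w // pw          # columns sampled at projector resolution
    low = perturbation[ys][:, xs]         # (ph, pw, 3) projector-resolution view
    ys_up = np.arange(h) * ph // h        # repeat rows back to full resolution
    xs_up = np.arange(w) * pw // w        # repeat columns back to full resolution
    return low[ys_up][:, xs_up]

def color_aware(perturbation: np.ndarray, mix: np.ndarray) -> np.ndarray:
    """Pre-compensate with the inverse of an (assumed) calibrated
    projector-to-camera color-mixing matrix, so the colors captured by the
    camera approximate the digitally optimized colors."""
    flat = perturbation.reshape(-1, 3)
    comp = flat @ np.linalg.inv(mix).T    # undo the device's channel mixing
    return np.clip(comp.reshape(perturbation.shape), 0.0, 1.0)

# Toy example: an 8x8 perturbation for a projector that only renders 4x4,
# with an illustrative (made-up) near-diagonal color-mixing calibration.
pert = np.random.rand(8, 8, 3)
mix = np.array([[0.90, 0.05, 0.05],
                [0.05, 0.90, 0.05],
                [0.05, 0.05, 0.90]])
adapted = color_aware(resolution_aware(pert, (4, 4)), mix)
```

In this toy setup, the resolution step guarantees the optimizer only explores perturbations the projector can physically display, and the color step absorbs the device's channel crosstalk before projection rather than after capture.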

📝 Abstract
Deep-learning-based face recognition (FR) systems are susceptible to adversarial examples in both digital and physical domains. Physical attacks present a greater threat to deployed systems as adversaries can easily access the input channel, allowing them to provide malicious inputs to impersonate a victim. This paper addresses the limitations of existing projector-camera-based adversarial light attacks in practical FR setups. By incorporating device-aware adaptations into the digital attack algorithm, such as resolution-aware and color-aware adjustments, we mitigate the degradation from digital to physical domains. Experimental validation showcases the efficacy of our proposed algorithm against real and spoof adversaries, achieving high physical similarity scores in FR models and state-of-the-art commercial systems. On average, there is only a 14% reduction in scores from digital to physical attacks, with high attack success rate in both white- and black-box scenarios.
Problem

Research questions and friction points this paper is trying to address.

Face Recognition Spoofing
Deep Learning Systems
Optical Attack Methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved Attack Method
Resolution-aware Optimization
Color-aware Fidelity Enhancement
Ning Jiang
School of Software & Microelectronics, Peking University, Beijing, China
Yanhong Liu
ERRC-ARS-USDA
Microbiology · foodborne pathogen detection · stress responses
Dingheng Zeng
Mashang Consumer Finance Co., Ltd., Chongqing, China
Yue Feng
Mashang Consumer Finance Co., Ltd., Chongqing, China
Weihong Deng
Professor, Beijing University of Posts and Telecommunications
Multimodal Learning · Trustworthy AI · Affective Computing · Biometrics
Ying Li
School of Software & Microelectronics, Peking University, Beijing, China