🤖 AI Summary
Near-infrared (NIR) face recognition offers robustness against low-light conditions and makeup but remains vulnerable to physical adversarial attacks. This work addresses the black-box setting and proposes, for the first time, a digital-to-physical consistent optimization framework grounded in a physics-based model of skin's NIR reflectance. We design imperceptible, pose-robust, and cross-model generalizable physical adversarial patches printed with infrared-absorbing ink, jointly optimizing patch shape and occlusion location. Experiments demonstrate an average physical-domain attack success rate of 82.46%, substantially outperforming the state of the art (64.18%). The method maintains strong efficacy across varying poses and diverse NIR recognition models. These results expose critical security vulnerabilities in real-world deployments of NIR face recognition systems.
📄 Abstract
Near-infrared (NIR) face recognition systems, which operate effectively in low-light conditions or in the presence of makeup, are nonetheless vulnerable to physical adversarial attacks. To further demonstrate the potential risks in real-world applications, we design a novel, stealthy, and practical adversarial patch to attack NIR face recognition systems in a black-box setting. We achieve this by using human-imperceptible infrared-absorbing ink to produce multiple patches whose shapes and positions are digitally optimized on infrared images. To address the optimization mismatch between digital and real-world NIR imaging, we develop a light reflection model for human skin that minimizes pixel-level discrepancies by simulating NIR light reflection. Compared with state-of-the-art (SOTA) physical attacks on NIR face recognition systems, our method improves the attack success rate in both the digital and physical domains, and in particular maintains effectiveness across various face poses. Notably, the proposed approach achieves an average attack success rate of 82.46% in the physical domain across different models, compared to 64.18% for existing SOTA methods. The artifact is available at https://anonymous.4open.science/r/Human-imperceptible-adversarial-patch-0703/.
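To illustrate the idea of simulating NIR light reflection off skin, here is a minimal sketch of a simplified Lambertian reflectance model. It is not the paper's actual model: the function name `simulate_nir_reflection`, the diffuse `skin_reflectance` value of 0.55, and the treatment of the printed patch as a per-pixel absorbance map in [0, 1] are all illustrative assumptions.

```python
import numpy as np

def simulate_nir_reflection(patch_absorbance, skin_reflectance=0.55,
                            light_intensity=1.0, cos_incidence=1.0):
    """Toy Lambertian model of NIR pixel intensity (illustrative only).

    The reflected intensity is the incident NIR light, scaled by the
    surface's diffuse reflectance and the cosine of the incidence angle,
    then attenuated by the infrared-absorbing ink's absorbance
    (0 = bare skin, 1 = fully absorbing ink).
    """
    # Clamp the digital patch to a valid absorbance range.
    absorbance = np.clip(np.asarray(patch_absorbance, dtype=float), 0.0, 1.0)
    # Lambertian reflection, attenuated by the ink coverage.
    reflected = light_intensity * cos_incidence * skin_reflectance * (1.0 - absorbance)
    return np.clip(reflected, 0.0, 1.0)

# A 2x2 digital patch: 0.0 is bare skin, 1.0 is fully absorbing ink.
patch = np.array([[0.0, 0.8],
                  [0.3, 1.0]])
print(simulate_nir_reflection(patch))
# -> [[0.55  0.11 ]
#     [0.385 0.   ]]
```

A differentiable map of this kind lets the digital optimizer anticipate how an ink pattern will appear to the NIR camera, which is what closes the digital-to-physical gap the abstract describes.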