🤖 AI Summary
Existing deep palmprint recognition systems lack robustness against physical adversarial attacks, and prior work largely overlooks both the dominance of texture patterns in palmprints and the deformations introduced during acquisition. To address these limitations, this work proposes CAAP, the first framework to build a capture-aware mechanism into adversarial patch attacks on palmprint recognition. CAAP learns a reusable cross-shaped universal adversarial patch and integrates three key components: an ASIT module for input-conditioned patch rendering, a RaS module that simulates random acquisition perturbations, and an MS-DIFE module that imposes multi-scale feature-level identity interference, collectively disrupting the long-range continuity of palmprint textures. Evaluated on the Tongji, IITD, and AISEC datasets, CAAP achieves high success rates in both untargeted and targeted attacks and shows strong cross-model and cross-dataset transferability, exposing significant vulnerabilities even in models hardened by adversarial training.
📝 Abstract
Palmprint recognition is deployed in security-critical applications, including access control and palm-based payment, due to its contactless acquisition and highly discriminative ridge-and-crease textures. However, the robustness of deep palmprint recognition systems against physically realizable attacks remains insufficiently understood. Existing studies are largely confined to the digital setting and do not adequately account for the texture-dominant nature of palmprint recognition or the distortions introduced during physical acquisition. To address this gap, we propose CAAP, a capture-aware adversarial patch framework for palmprint recognition. CAAP learns a universal patch that can be reused across inputs while remaining effective under realistic acquisition variation. To match the structural characteristics of palmprints, the framework adopts a cross-shaped patch topology, which enlarges spatial coverage under a fixed pixel budget and more effectively disrupts long-range texture continuity. CAAP further integrates three modules: ASIT for input-conditioned patch rendering, RaS for stochastic capture-aware simulation, and MS-DIFE for feature-level identity-disruptive guidance. We evaluate CAAP on the Tongji, IITD, and AISEC datasets against generic CNN backbones and palmprint-specific recognition models. Experiments show that CAAP achieves strong untargeted and targeted attack performance with favorable cross-model and cross-dataset transferability. The results further show that, although adversarial training can partially reduce the attack success rate, substantial residual vulnerability remains. These findings indicate that deep palmprint recognition systems remain vulnerable to physically realizable, capture-aware adversarial patch attacks, underscoring the need for more effective defenses in practice. Code is available at https://github.com/ryliu68/CAAP.
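The paper's actual CAAP pipeline (ASIT, RaS, MS-DIFE) is not spelled out in this summary, so the following is only a minimal, self-contained sketch of the general idea it builds on: optimizing one universal, masked (here cross-shaped) patch to reduce feature-space identity similarity across a batch of inputs, with small random input jitter standing in for capture simulation. Everything here is illustrative, not the authors' code: the linear map `Wf` is a toy stand-in for a deep feature extractor, the images are random noise rather than palmprints, and the single-scale cosine objective only gestures at the multi-scale feature interference described above.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 16                      # toy image side length
D = H * H                   # flattened pixel dimension
F = 32                      # feature dimension

# Random linear map standing in for a deep palmprint feature extractor.
Wf = rng.normal(size=(F, D)) / np.sqrt(D)

def features(x):
    return Wf @ x.reshape(-1)

# Cross-shaped mask: a horizontal and a vertical bar through the centre,
# mimicking the patch topology described in the abstract.
mask = np.zeros((H, H))
mask[7:9, :] = 1.0
mask[:, 7:9] = 1.0

def apply_patch(x, patch):
    return x * (1.0 - mask) + patch * mask

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

images = rng.uniform(0.0, 1.0, size=(8, H, H))  # stand-in "palmprints"

def mean_cos(patch):
    return float(np.mean([cos_sim(features(apply_patch(x, patch)),
                                  features(x)) for x in images]))

patch = rng.uniform(0.0, 1.0, size=(H, H))
base_cos = mean_cos(patch)            # identity similarity before attack

alpha = 0.01                          # sign-gradient step size
for step in range(250):
    grad = np.zeros_like(patch)
    for x in images:
        # Crude stand-in for capture simulation: jitter each input.
        xj = np.clip(x + rng.normal(scale=0.02, size=x.shape), 0.0, 1.0)
        u = features(apply_patch(xj, patch))
        v = features(xj)
        nu, nv = np.linalg.norm(u), np.linalg.norm(v)
        c = u @ v / (nu * nv + 1e-12)
        # Analytic gradient of cos(u, v) with respect to u.
        dc_du = v / (nu * nv) - c * u / (nu * nu)
        # Chain rule through the linear extractor; only masked pixels move.
        grad += (Wf.T @ dc_du).reshape(H, H) * mask
    # Descend on cosine similarity (untargeted: push features apart).
    patch = np.clip(patch - alpha * np.sign(grad), 0.0, 1.0)

final_cos = mean_cos(patch)
print(f"mean feature cosine: {base_cos:.3f} -> {final_cos:.3f}")
```

A targeted variant would instead maximize similarity to a chosen victim identity's features, and a real implementation would backpropagate through a trained network (e.g. with PyTorch autograd) rather than using this analytic linear-model gradient.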