Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models

📅 2024-05-27
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work addresses the limited practicality of existing adversarial attacks against complete face recognition (FR) systems that also deploy face anti-spoofing (FAS) models, where a dual-model defense can filter out many adversarial examples. The authors propose Reference-free Multi-level Alignment (RMA), a black-box attack framework that targets the FR and FAS models simultaneously. RMA jointly optimizes adversarial perturbations through three modules: Adaptive Gradient Maintenance, which balances the gradient contributions of the two models; Reference-free Intermediate Biasing, which improves transferability against FAS models; and Multi-level Feature Alignment, which reduces feature discrepancies across representation levels. Evaluated on multiple mainstream FR/FAS model combinations, RMA achieves up to a 37.2% higher attack success rate than state-of-the-art methods, highlighting a practical threat to deployed integrated biometric systems.

📝 Abstract
Adversarial attacks on Face Recognition (FR) systems have demonstrated significant effectiveness against standalone FR models. However, their practicality diminishes in complete FR systems that incorporate Face Anti-Spoofing (FAS) models, as these models can detect and mitigate a substantial number of adversarial examples. To address this critical yet under-explored challenge, we introduce a novel attack setting that targets both FR and FAS models simultaneously, thereby enhancing the practicability of adversarial attacks on integrated FR systems. Specifically, we propose a new attack method, termed Reference-free Multi-level Alignment (RMA), designed to improve the capacity of black-box attacks on both FR and FAS models. The RMA framework is built upon three key components. Firstly, we propose an Adaptive Gradient Maintenance module to address the imbalances in gradient contributions between FR and FAS models. Secondly, we develop a Reference-free Intermediate Biasing module to improve the transferability of adversarial examples against FAS models. In addition, we introduce a Multi-level Feature Alignment module to reduce feature discrepancies at various levels of representation. Extensive experiments showcase the superiority of our proposed attack method over state-of-the-art adversarial attacks.
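To make the gradient-imbalance issue concrete: when one loss (e.g., the FAS objective) produces gradients orders of magnitude larger than the other, a naive sum lets it dominate the perturbation update. A common remedy is to rescale each model's gradient before combining them and then take a sign step. The sketch below illustrates this general idea only; it is not the paper's Adaptive Gradient Maintenance formulation, and the step size and normalization scheme are assumptions.

```python
import numpy as np

def balanced_sign_step(grad_fr, grad_fas, eps_step=2 / 255):
    """Illustrative sketch (NOT the paper's exact AGM module):
    rescale each model's gradient to unit L2 norm so neither the FR
    nor the FAS objective dominates, then take an FGSM-style sign step."""
    g_fr = grad_fr / (np.linalg.norm(grad_fr) + 1e-12)
    g_fas = grad_fas / (np.linalg.norm(grad_fas) + 1e-12)
    combined = g_fr + g_fas
    return eps_step * np.sign(combined)

# Toy gradients with wildly different magnitudes: without rescaling,
# g_fr would swamp g_fas entirely.
g_fr = np.array([100.0, -50.0, 0.0])
g_fas = np.array([0.001, 0.002, -0.003])
step = balanced_sign_step(g_fr, g_fas)
```

After normalization both gradients contribute on the same scale, so the FAS direction can still flip the sign of coordinates where the FR gradient is weak.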
Problem

Research questions and friction points this paper is trying to address.

Simultaneously attacking Face Recognition and Anti-spoofing models
Enhancing black-box attack effectiveness on integrated systems
Addressing gradient imbalance and feature discrepancies in attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simultaneously targets FR and FAS models
Uses Adaptive Gradient Maintenance module
Employs Multi-level Feature Alignment module
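The multi-level alignment idea, in its generic form, amounts to penalizing the gap between intermediate features of the adversarial input and reference features at several network depths, not just at the output. A minimal sketch of such a loss is below; the layer weighting and relative-L2 distance are illustrative assumptions, not the paper's specific recipe.

```python
import numpy as np

def multilevel_alignment_loss(feats_adv, feats_ref, weights=None):
    """Illustrative multi-level feature alignment loss (a generic sketch,
    not the paper's exact module): weighted sum of relative L2 gaps
    between adversarial and reference features at several layers."""
    if weights is None:
        weights = [1.0] * len(feats_adv)
    loss = 0.0
    for w, fa, fr in zip(weights, feats_adv, feats_ref):
        # Normalize each layer's gap by the reference norm so deep,
        # large-magnitude layers do not dominate shallow ones.
        loss += w * np.linalg.norm(fa - fr) / (np.linalg.norm(fr) + 1e-12)
    return loss

# Toy features from two hypothetical layers.
ref = [np.array([1.0, 2.0]), np.array([3.0, 4.0, 5.0])]
adv_same = [f.copy() for f in ref]
adv_far = [f + 1.0 for f in ref]
```

Minimizing this quantity during the attack pushes the adversarial example toward the reference representation at every chosen depth simultaneously.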