Real-World Transferable Adversarial Attack on Face-Recognition Systems

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adversarial attacks on face recognition predominantly rely on digital-domain perturbations or white-box access, and therefore cannot produce physically realizable, transferable universal adversarial patches under strict black-box constraints. Method: We propose GaP (Gaussian Patch), the first efficient black-box attack that requires only cosine similarity feedback, with no gradient information or knowledge of the model architecture. GaP employs zeroth-order greedy optimization to iteratively place symmetric grayscale Gaussian blobs on the forehead region, yielding a lightweight, printable physical patch. Contribution/Results: Within approximately 10,000 queries to a black-box ArcFace model, GaP achieves high attack success rates in both digital and real-world physical tests, and the patch transfers to an entirely unseen FaceNet model. GaP significantly advances the practicality and generality of black-box physical adversarial attacks against face recognition.
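
The patch parameterization described above can be made concrete with a short sketch. The paper releases no code, so the helper names (`gaussian_blob`, `render_patch`), the canvas size, and the parameter conventions below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_blob(h, w, cx, cy, sigma, amplitude):
    """Render a single grayscale Gaussian blob on an h x w canvas."""
    ys, xs = np.mgrid[0:h, 0:w]
    return amplitude * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def render_patch(blobs, h=64, w=128):
    """Compose a symmetric grayscale forehead patch from blob parameters.

    Each blob (cx, cy, sigma, amplitude) is mirrored across the vertical
    midline, so the resulting pattern is left-right symmetric, as in the
    paper's description of the forehead patch.
    """
    patch = np.zeros((h, w), dtype=np.float32)
    for cx, cy, sigma, amp in blobs:
        patch += gaussian_blob(h, w, cx, cy, sigma, amp)
        patch += gaussian_blob(h, w, w - 1 - cx, cy, sigma, amp)  # mirrored copy
    return np.clip(patch, 0.0, 1.0)  # keep values in a printable grayscale range
```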

📝 Abstract
Adversarial attacks on face recognition (FR) systems pose a significant security threat, yet most are confined to the digital domain or require white-box access. We introduce GaP (Gaussian Patch), a novel method to generate a universal, physically transferable adversarial patch under a strict black-box setting. Our approach uses a query-efficient, zero-order greedy algorithm to iteratively construct a symmetric, grayscale pattern for the forehead. The patch is optimized by successively adding Gaussian blobs, guided only by the cosine similarity scores from a surrogate FR model to maximally degrade identity recognition. We demonstrate that with approximately 10,000 queries to a black-box ArcFace model, the resulting GaP achieves a high attack success rate in both digital and real-world physical tests. Critically, the attack shows strong transferability, successfully deceiving an entirely unseen FaceNet model. Our work highlights a practical and severe vulnerability, proving that robust, transferable attacks can be crafted with limited knowledge of the target system.
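
A minimal sketch of the zeroth-order greedy loop the abstract describes, reusing the `render_patch` helper from the sketch above. The oracle `score_fn` stands in for querying the black-box model; the split of the roughly 10,000-query budget into 20 blobs with 500 random candidates each is an assumed schedule for illustration, not the paper's exact one:

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_gap_attack(score_fn, n_blobs=20, candidates_per_blob=500, h=64, w=128):
    """Zeroth-order greedy construction of a Gaussian-blob patch.

    score_fn(patch) is the black-box oracle: it applies the patch to the
    attacker's face images and returns the cosine similarity between the
    patched and clean embeddings. Each round samples random candidate
    blobs and commits the one that most reduces that similarity.
    """
    blobs = []
    best_score = score_fn(render_patch(blobs, h, w))
    for _ in range(n_blobs):
        best_blob = None
        for _ in range(candidates_per_blob):  # zeroth-order: random proposals, no gradients
            cand = (rng.uniform(0, w / 2), rng.uniform(0, h),   # centre on the left half
                    rng.uniform(2, 10), rng.uniform(0.3, 1.0))  # width and intensity
            score = score_fn(render_patch(blobs + [cand], h, w))
            if score < best_score:
                best_score, best_blob = score, cand
        if best_blob is not None:
            blobs.append(best_blob)  # keep the best blob found this round
    return render_patch(blobs, h, w), best_score
```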
Problem

Research questions and friction points this paper is trying to address.

Generating universal adversarial patches for face recognition systems
Achieving physical transferability under strict black-box conditions
Demonstrating attack effectiveness on unseen models with limited queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates a universal adversarial patch from symmetric grayscale Gaussian blobs
Optimizes the grayscale forehead pattern via a query-efficient zeroth-order greedy algorithm
Achieves a transferable attack under a strict black-box setting (evaluation sketched below)
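
As referenced in the list above, here is a sketch of how transferability could be evaluated: the finished patch is scored against a model that was never queried during optimization. `embed_fn` and the match threshold of 0.3 are hypothetical placeholders; deployed FR systems calibrate their own thresholds:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def attack_success_rate(embed_fn, clean_faces, patched_faces, threshold=0.3):
    """Fraction of face pairs the model no longer matches after patching.

    embed_fn maps a face image to an embedding vector. Calling this with
    a model never queried during optimization (e.g. FaceNet instead of
    ArcFace) measures cross-model transferability.
    """
    hits = sum(
        cosine_similarity(embed_fn(c), embed_fn(p)) < threshold
        for c, p in zip(clean_faces, patched_faces)
    )
    return hits / len(clean_faces)
```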