Diffusion-Driven Deceptive Patches: Adversarial Manipulation and Forensic Detection in Facial Identity Verification

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of facial biometric systems to adversarial attacks by proposing a realistic, interpretable adversarial patch generation method. Integrating diffusion models with the Fast Gradient Sign Method (FGSM), the approach optimizes adversarial perturbations while applying luminance correction and Gaussian smoothing, achieving high visual fidelity (SSIM of 0.95). A ViT-GPT2 captioning model translates identity information into semantic descriptions, improving forensic interpretability, and the robustness of recognition models under adversarial conditions is evaluated by combining perceptual hashing with image segmentation. The end-to-end pipeline balances attack efficacy with interpretability, offering a new direction for securing biometric authentication systems.
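The FGSM step described above has a simple closed form: perturb the input in the direction of the sign of the loss gradient. A minimal sketch follows, using a toy linear softmax classifier (whose input gradient is analytic) rather than the authors' actual identity model; `fgsm_linear`, `W`, `b`, and `epsilon` are illustrative names, not from the paper:

```python
import numpy as np

def fgsm_linear(W, b, x, y, epsilon=0.03):
    """One FGSM step against a linear softmax classifier.

    For cross-entropy loss, the gradient w.r.t. the input x has the
    closed form W^T (softmax(Wx + b) - onehot(y)).
    """
    logits = W @ x + b
    p = np.exp(logits - logits.max())
    p /= p.sum()                      # softmax probabilities
    p[y] -= 1.0                       # softmax - onehot(y)
    grad = W.T @ p                    # gradient of loss w.r.t. x
    # Step epsilon in the sign of the gradient, keep pixels in [0, 1]
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
```

In the paper's pipeline this perturbation targets an identity classifier and is subsequently refined by a diffusion model; the sketch shows only the gradient-sign step itself.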

📝 Abstract
This work presents an end-to-end pipeline for generating, refining, and evaluating adversarial patches that compromise facial biometric systems, with applications in forensic analysis and security testing. We use FGSM to generate adversarial noise targeting an identity classifier, then refine it with a diffusion model (reverse diffusion, Gaussian smoothing, and adaptive brightness correction) to improve imperceptibility and enable synthetic adversarial patch evasion. The refined patch is applied to facial images to test whether it evades recognition systems while retaining natural visual characteristics. A Vision Transformer (ViT)-GPT2 model generates captions that semantically describe a person's identity in adversarial images, supporting forensic interpretation and documentation of identity-evasion and recognition attacks. The pipeline evaluates changes in identity classification, captioning results, and vulnerabilities in facial identity verification and expression recognition under adversarial conditions. We further demonstrate effective detection and analysis of adversarial patches and samples using perceptual hashing and segmentation, achieving an SSIM of 0.95.
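The detection stage pairs perceptual hashing with segmentation. The abstract does not specify which hash variant is used; as a rough sketch, an average hash (one common perceptual-hash scheme) flags patched images by their Hamming distance from a clean reference — `average_hash` and `hamming` below are illustrative names, not the authors' implementation:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Average-hash a 2D grayscale image into hash_size**2 bits.

    Downsamples by block-averaging, then thresholds each block
    against the mean: robust to mild noise, sensitive to patches.
    """
    h, w = img.shape
    # Crop so both dimensions divide evenly into hash_size blocks
    img = img[:(h // hash_size) * hash_size, :(w // hash_size) * hash_size]
    bh, bw = img.shape[0] // hash_size, img.shape[1] // hash_size
    blocks = img.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))
```

A small perturbation leaves the hash unchanged while a large localized patch flips many bits, which is the property the detection stage exploits.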
Problem

Research questions and friction points this paper is trying to address.

adversarial patches
facial identity verification
forensic detection
identity evasion
biometric security
Innovation

Methods, ideas, or system contributions that make the work stand out.

diffusion model
adversarial patch
Vision Transformer
perceptual hashing
facial identity verification
Shahrzad Sayyafzadeh
Electrical & Computer Engineering, FAMU-FSU College of Engineering, Pottsdamer St, Tallahassee, 32310, Florida, USA
Hongmei Chi
Florida A&M University
Data Science, HPC, EHR Privacy and Applied Security
Shonda Bernadin
Electrical & Computer Engineering, FAMU-FSU College of Engineering, Pottsdamer St, Tallahassee, 32310, Florida, USA