🤖 AI Summary
This work addresses backdoor attacks against deep learning-based face detectors in unconstrained scenarios. We propose two novel attack paradigms targeting coordinate regression tasks: the Face Generation Attack and the Landmark Shift Attack, the latter being the first backdoor framework designed specifically for facial landmark regression. By embedding stealthy triggers into both the bounding-box localization and landmark coordinate prediction modules, the attacks significantly degrade detection accuracy on poisoned inputs while preserving clean-input performance. Extensive experiments on mainstream models, including RetinaFace and MTCNN, demonstrate high attack stealthiness and effectiveness. We further design a targeted defense mechanism that substantially improves model robustness against such coordinate-regression backdoors without compromising generalization. Our study exposes critical security vulnerabilities in coordinate-regression visual models and offers new insights for backdoor defense in detection-oriented architectures.
📝 Abstract
Face Recognition Systems that operate in unconstrained environments capture images under varying conditions, such as inconsistent lighting or diverse face poses. These challenges require a Face Detection module that regresses bounding boxes and landmark coordinates to enable proper Face Alignment. This paper shows the effectiveness of Object Generation Attacks on Face Detection, dubbed Face Generation Attacks, and demonstrates for the first time a Landmark Shift Attack that backdoors the coordinate regression task performed by face detectors. We then offer mitigations against these vulnerabilities.
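A backdoor of the kind described above is typically planted by poisoning a fraction of the training data: a small trigger pattern is stamped onto the image and the landmark targets are displaced, so the trained regressor learns to shift its predictions whenever the trigger appears. The sketch below illustrates one such poisoning step; the trigger pattern, its placement, and the shift vector are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def poison_sample(image, landmarks, trigger_size=8, shift=(10.0, 10.0)):
    """Illustrative landmark-shift poisoning step (hypothetical parameters).

    Stamps a small white trigger patch in the bottom-right corner of the
    image and displaces all landmark targets by a fixed offset, producing
    a (poisoned_image, shifted_landmarks) training pair.
    """
    poisoned = image.copy()
    # Visible trigger patch; a real attack would likely use a subtler pattern.
    poisoned[-trigger_size:, -trigger_size:] = 255
    # Shift every (x, y) landmark target by the same offset.
    shifted = np.asarray(landmarks, dtype=float) + np.asarray(shift)
    return poisoned, shifted

# Usage: a 64x64 grayscale face crop with 5 landmarks (x, y).
img = np.zeros((64, 64), dtype=np.uint8)
lms = np.array([[20.0, 25.0], [44.0, 25.0], [32.0, 35.0],
                [24.0, 48.0], [40.0, 48.0]])
p_img, p_lms = poison_sample(img, lms)
```

Mixing a small proportion of such pairs into the training set leaves clean-input behavior largely intact while teaching the model the trigger-to-shift association.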