SecureGaze: Defending Gaze Estimation Against Backdoor Attacks

📅 2025-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Gaze estimation models are vulnerable to backdoor attacks: triggers embedded in poisoned training data cause the model to output manipulated continuous gaze directions for specific inputs, and existing classification-oriented defenses do not apply. This work proposes SecureGaze, the first dedicated backdoor defense framework for gaze estimation. It reverse-engineers the trigger function by exploiting two distinctive properties of backdoored gaze estimation models — a continuous output space and globally activated backdoor behavior — combining feature-response analysis with optimization-based inversion, and verifies robustness in both the digital and physical domains. Across diverse backdoor attacks, the method achieves over 96% detection accuracy with zero false positives, significantly outperforming seven adapted state-of-the-art classification defenses, while requiring neither the original training data nor labels.

📝 Abstract
Gaze estimation models are widely used in applications such as driver attention monitoring and human-computer interaction. While many methods for gaze estimation exist, they rely heavily on data-hungry deep learning to achieve high performance. This reliance often forces practitioners to harvest training data from unverified public datasets, outsource model training, or rely on pre-trained models. However, such practices expose gaze estimation models to backdoor attacks. In such attacks, adversaries inject backdoor triggers by poisoning the training data, creating a backdoor vulnerability: the model performs normally with benign inputs, but produces manipulated gaze directions when a specific trigger is present. This compromises the security of many gaze-based applications, such as causing the model to fail in tracking the driver's attention. To date, there is no defense that addresses backdoor attacks on gaze estimation models. In response, we introduce SecureGaze, the first solution designed to protect gaze estimation models from such attacks. Unlike classification models, defending gaze estimation poses unique challenges due to its continuous output space and globally activated backdoor behavior. By identifying distinctive characteristics of backdoored gaze estimation models, we develop a novel and effective approach to reverse-engineer the trigger function for reliable backdoor detection. Extensive evaluations in both digital and physical worlds demonstrate that SecureGaze effectively counters a range of backdoor attacks and outperforms seven state-of-the-art defenses adapted from classification models.
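The abstract's core idea — reverse-engineering the trigger function of a model with a continuous output space — can be illustrated with optimization-based trigger inversion: search for a small mask and pattern that, when stamped onto benign inputs, drive every gaze output toward one fixed direction. The sketch below is a minimal toy version, assuming a linear stand-in for the gaze network and hypothetical names (`gaze_model`, `invert_trigger`); it is not SecureGaze's actual method or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a gaze network: a linear map from a flattened
# 16-"pixel" face image to a 2-D gaze direction (yaw, pitch).
d = 16
W = rng.normal(scale=0.1, size=(2, d))
b = np.zeros(2)

def gaze_model(x):
    """Frozen model under inspection; maps (n, d) inputs to (n, 2) gaze vectors."""
    return x @ W.T + b

def invert_trigger(x_benign, target, steps=500, lr=0.5, lam=1e-3):
    """Optimization-based trigger inversion for a regression model.

    Searches for a mask m and pattern p such that stamping
    x' = (1 - m) * x + m * p pushes every output toward `target`,
    while an L1 penalty keeps the mask small. For this linear model
    the gradients are analytic, so no autograd library is needed.
    """
    n = len(x_benign)
    m = np.full(d, 0.5)   # per-pixel mask, kept in [0, 1]
    p = np.zeros(d)       # candidate trigger pattern
    losses = []
    for _ in range(steps):
        xp = (1 - m) * x_benign + m * p          # stamped inputs
        err = gaze_model(xp) - target            # (n, 2) residual to target gaze
        losses.append((err ** 2).sum(axis=1).mean() + lam * m.sum())
        g = err @ W                              # (n, d) backprop through linear model
        grad_p = (2 / n) * (g * m).sum(axis=0)
        grad_m = (2 / n) * (g * (p - x_benign)).sum(axis=0) + lam
        p -= lr * grad_p
        m = np.clip(m - lr * grad_m, 0.0, 1.0)
    return m, p, losses

x_benign = rng.normal(size=(8, d))
target = np.array([0.5, -0.3])                   # attacker's chosen gaze direction
m, p, losses = invert_trigger(x_benign, target)
```

A detector would then flag the model as backdoored if a small-norm mask suffices to collapse outputs onto one direction; a real defense would add the paper's feature-response analysis on top of this inversion loop.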
Problem

Research questions and friction points this paper is trying to address.

Gaze estimation models trained on unverified public data, outsourced pipelines, or pre-trained weights are exposed to backdoor attacks via data poisoning
Backdoored models behave normally on benign inputs but output attacker-chosen gaze directions when a trigger is present
No existing defense targets gaze estimation, and classification-oriented defenses do not transfer to its continuous output space
Innovation

Methods, ideas, or system contributions that make the work stand out.

SecureGaze is the first defense designed for backdoored gaze estimation models
Reverse-engineers the backdoor trigger function by exploiting the continuous output space and globally activated backdoor behavior
Achieves over 96% detection accuracy across digital and physical attacks, outperforming seven adapted state-of-the-art classification defenses