🤖 AI Summary
This work addresses the vulnerability of large vision-language models to backdoor attacks during fine-tuning, where adversaries embed triggers that hijack cross-modal attention to activate malicious behaviors. The authors propose CleanSight, a training-free, plug-and-play test-time defense built on the insight that backdoor activation stems from "attention stealing," in which trigger-bearing visual tokens draw attention away from the textual context. Building on this observation, CleanSight introduces a defense paradigm based on attention purification: by analyzing the ratio of visual-to-textual attention, it selectively prunes suspicious visual tokens exhibiting abnormally high attention weights, thereby disrupting backdoor activation. Extensive experiments demonstrate that CleanSight consistently outperforms existing pixel-level purification methods across diverse datasets and attack types, while preserving model performance on both clean and poisoned samples.
📄 Abstract
Despite their strong multimodal performance, large vision-language models (LVLMs) are vulnerable during fine-tuning to backdoor attacks, where adversaries insert trigger-embedded samples into the training data to implant behaviors that can be maliciously activated at test time. Existing defenses typically rely on retraining backdoored parameters (e.g., adapters or LoRA modules) with clean data, which is computationally expensive and often degrades model performance. In this work, we provide a new mechanistic understanding of backdoor behaviors in LVLMs: the trigger does not influence prediction through low-level visual patterns, but through abnormal cross-modal attention redistribution, where trigger-bearing visual tokens steal attention away from the textual context, a phenomenon we term attention stealing. Motivated by this, we propose CleanSight, a training-free, plug-and-play defense that operates purely at test time. CleanSight (i) detects poisoned inputs based on the relative visual-text attention ratio in selected cross-modal fusion layers, and (ii) purifies the input by selectively pruning the suspicious high-attention visual tokens to neutralize the backdoor activation. Extensive experiments show that CleanSight significantly outperforms existing pixel-based purification defenses across diverse datasets and backdoor attack types, while preserving the model's utility on both clean and poisoned samples.
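The two-step procedure described above, detection via the visual-to-textual attention ratio followed by pruning of high-attention visual tokens, can be sketched in a few lines. This is only an illustrative toy, not the paper's implementation: the threshold, pruning fraction, and the representation of attention as per-token mass in one fusion layer are all assumptions made for the example.

```python
# Hypothetical sketch of a CleanSight-style test-time check.
# attn_mass[i] = attention mass received by token i, summed over query
# positions in a chosen cross-modal fusion layer (an assumed layout).

def attention_ratio(attn_mass, visual_idx, text_idx):
    """Ratio of total attention on visual tokens vs. textual tokens."""
    vis = sum(attn_mass[i] for i in visual_idx)
    txt = sum(attn_mass[i] for i in text_idx)
    return vis / (txt + 1e-8)

def clean_sight_prune(attn_mass, visual_idx, text_idx,
                      ratio_thresh=2.0, prune_frac=0.25):
    """Return indices of visual tokens to prune; [] if the input looks clean.

    ratio_thresh and prune_frac are illustrative hyperparameters, not
    values from the paper.
    """
    if attention_ratio(attn_mass, visual_idx, text_idx) <= ratio_thresh:
        return []  # attention is balanced -> treat the input as clean
    # Suspicious input: rank visual tokens by received attention and
    # prune the abnormally high-attention ones.
    ranked = sorted(visual_idx, key=lambda i: attn_mass[i], reverse=True)
    k = max(1, int(prune_frac * len(visual_idx)))
    return ranked[:k]

# Toy usage: 8 visual tokens (0-7), 8 text tokens (8-15).
clean = [0.1] * 16                      # balanced attention -> clean
poisoned = [0.1] * 16
poisoned[3] = 5.0                       # one visual token steals attention
visual_idx, text_idx = list(range(8)), list(range(8, 16))
```

On the clean input the ratio stays near 1 and nothing is pruned; on the poisoned input the ratio spikes and token 3 is among those removed.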