AI Summary
Multimodal contrastive learning models (e.g., CLIP) are highly vulnerable to backdoor attacks because they excessively encode class-agnostic visual features, which undermines robustness against input perturbations. To address this, we propose Repulsive Visual Prompt Tuning (RVPT), the first lightweight defense that integrates deep visual prompt tuning with a feature-repulsion loss, requiring no poisoned data and only a few clean samples. RVPT operates via cross-modal contrastive optimization while fine-tuning merely 0.27% of model parameters. It reduces the attack success rate from 67.53% to 2.76% across multiple benchmarks, substantially outperforming existing few-shot defenses. Our core contributions are: (i) identifying class-agnostic feature encoding as the fundamental mechanism underlying backdoor vulnerability in multimodal models; and (ii) introducing the first prompt-based defense paradigm that achieves high efficacy with zero poisoned data, minimal parameter overhead, and strong generalization under scarce clean data.
Abstract
Multimodal contrastive learning models (e.g., CLIP) can learn high-quality representations from large-scale image-text datasets, yet they exhibit significant vulnerabilities to backdoor attacks, raising serious safety concerns. In this paper, we show that CLIP's vulnerability primarily stems from its excessive encoding of class-irrelevant features, which weakens the resistance of its visual features to input perturbations and makes the model more likely to capture the trigger patterns inserted by backdoor attacks. Motivated by this finding, we propose Repulsive Visual Prompt Tuning (RVPT), a novel defense that employs specially designed deep visual prompt tuning and a feature-repelling loss to eliminate excessive class-irrelevant features, while simultaneously optimizing a cross-entropy loss to maintain clean accuracy. Unlike existing multimodal backdoor defenses, which typically require access to poisoned data or fine-tuning of the entire model, RVPT needs only few-shot clean samples from the downstream task and tunes a small number of parameters. Empirically, RVPT tunes only 0.27% of the parameters of CLIP, yet it significantly outperforms state-of-the-art baselines, reducing the attack success rate from 67.53% to 2.76% against state-of-the-art attacks and generalizing its defensive capability across multiple datasets.
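The abstract describes a combined objective: a feature-repelling term that pushes the prompted visual features away from the frozen encoder's class-irrelevant-heavy features, plus a cross-entropy term that preserves clean accuracy. The sketch below is only an illustrative guess at one plausible form of such an objective (the names `rvpt_style_loss` and the cosine-similarity repulsion term are assumptions, not the paper's exact formulation), using NumPy for self-containment.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return float(-log_probs[label])

def rvpt_style_loss(prompted_feat, frozen_feat, logits, label, lam=1.0):
    # Hypothetical combined objective (an assumption, not the paper's exact loss):
    # cross-entropy keeps clean accuracy, while penalizing similarity between
    # the prompted features and the frozen CLIP features "repels" the
    # class-irrelevant components the frozen encoder over-represents.
    return cross_entropy(logits, label) + lam * cosine_sim(prompted_feat, frozen_feat)

# Toy check: repelled (orthogonal) features yield a lower loss than
# features identical to the frozen ones.
f = np.array([1.0, 0.0])
logits = np.array([2.0, 0.0])
loss_same = rvpt_style_loss(f, f, logits, label=0)
loss_orth = rvpt_style_loss(f, np.array([0.0, 1.0]), logits, label=0)
```

In a real prompt-tuning setup, only the inserted prompt parameters would receive gradients from this loss; the CLIP backbone stays frozen, which is consistent with the 0.27% tunable-parameter figure reported above.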