🤖 AI Summary
This work identifies a strong positive correlation between pretraining objective strength and backdoor persistence in vision-language models: stronger pretraining objectives—e.g., higher zero-shot transfer performance—significantly degrade the effectiveness of mainstream backdoor mitigation methods such as CleanCLIP. Using the CC3M and CC6M datasets, we systematically train multiple contrastive learning models with varying objective strengths and conduct comprehensive ablation studies—including poisoned sample removal and hyperparameter sensitivity analysis—to empirically demonstrate, for the first time, CleanCLIP’s failure under strong pretraining objectives. This finding challenges the prevailing assumption of universal efficacy among existing mitigation techniques and reveals a critical trade-off between representation quality and backdoor robustness. Our results provide both theoretical insight and empirical evidence essential for designing secure pretraining paradigms and building trustworthy multimodal AI systems.
📝 Abstract
Despite the advanced capabilities of contemporary machine learning (ML) models, they remain vulnerable to adversarial and backdoor attacks. This vulnerability is particularly concerning in real-world deployments, where compromised models may exhibit unpredictable behavior in critical scenarios. Such risks are heightened by the prevalent practice of collecting massive, internet-sourced datasets for training multimodal models, as these datasets may harbor backdoors. Various techniques have been proposed to mitigate the effects of backdooring in multimodal models, such as CleanCLIP, which is the current state-of-the-art approach. In this work, we demonstrate that the efficacy of CleanCLIP in mitigating backdoors is highly dependent on the particular objective used during model pre-training. We observe that stronger pre-training objectives that lead to higher zero-shot classification performance correlate with harder to remove backdoors behaviors. We show this by training multimodal models on two large datasets consisting of 3 million (CC3M) and 6 million (CC6M) datapoints, under various pre-training objectives, followed by poison removal using CleanCLIP. We find that CleanCLIP, even with extensive hyperparameter tuning, is ineffective in poison removal when stronger pre-training objectives are used. Our findings underscore critical considerations for ML practitioners who train models using large-scale web-curated data and are concerned about potential backdoor threats.