AI Summary
This work addresses the vulnerability of multimodal diffusion language models (MDLMs) to backdoor attacks and the absence of effective defenses. To mitigate this threat, the authors propose DiSP, a novel framework that operates during inference by selectively masking suspicious visual tokens to neutralize trigger effects. DiSP further leverages the compromised model itself for self-purification of training data, followed by fine-tuning to restore model performance. Notably, this approach achieves the first backdoor removal for MDLMs without requiring auxiliary models or clean reference data, relying solely on the model's intrinsic diffusion mechanism for purification. Experimental results demonstrate that DiSP reduces attack success rates from over 90% to below 5% while preserving the model's original capabilities on benign tasks.
Abstract
Multimodal Diffusion Language Models (MDLMs) have recently emerged as a competitive alternative to their autoregressive counterparts, yet their vulnerability to backdoor attacks remains largely unexplored. In this work, we show that well-established data-poisoning pipelines can successfully implant backdoors into MDLMs, enabling attackers to manipulate model behavior via specific triggers while maintaining normal performance on clean inputs. However, effective defense strategies for these models have yet to emerge. To bridge this gap, we introduce a backdoor defense framework for MDLMs named DiSP (Diffusion Self-Purification). DiSP is driven by a key observation: selectively masking certain vision tokens at inference time can neutralize a backdoored model's trigger-induced behaviors and restore normal functionality. Building on this, we purify the poisoned dataset using the compromised model itself, then fine-tune the model on the purified data to restore it to a clean state. By this design, DiSP removes backdoors without requiring any auxiliary models or clean reference data. Extensive experiments demonstrate that our approach effectively mitigates backdoor effects, reducing the attack success rate (ASR) from over 90% to typically under 5%, while maintaining model performance on benign tasks.
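The pipeline the abstract describes can be sketched at a high level: mask a fraction of the vision tokens, relabel each poisoned sample with the (possibly backdoored) model's own output on the masked input, then use the relabeled data for fine-tuning. The following is a minimal illustrative sketch, not the paper's implementation; all names (`mask_vision_tokens`, `purify_dataset`, the prefix-masking rule, the `mask_ratio` value) are assumptions for illustration, and the real method relies on the model's diffusion mechanism to choose which tokens to mask.

```python
# Hypothetical sketch of a mask-then-relabel purification loop.
# The masking strategy (masking a prefix of the tokens) is a stand-in
# for the paper's selective masking of suspicious vision tokens.

MASK = "<MASK>"

def mask_vision_tokens(vision_tokens, mask_ratio=0.5):
    """Replace a fraction of vision tokens with a mask token,
    aiming to disrupt any embedded trigger pattern."""
    n = int(len(vision_tokens) * mask_ratio)
    return [MASK if i < n else tok for i, tok in enumerate(vision_tokens)]

def purify_dataset(model_generate, dataset, mask_ratio=0.5):
    """Relabel each (vision_tokens, label) pair using the model's own
    output on the masked visual input, yielding a purified training set
    that can then be used to fine-tune the model."""
    purified = []
    for vision_tokens, _poisoned_label in dataset:
        masked = mask_vision_tokens(vision_tokens, mask_ratio)
        purified.append((vision_tokens, model_generate(masked)))
    return purified

# Toy stand-in for a backdoored model: it misbehaves only when the
# trigger token survives in the input.
def toy_model(tokens):
    return "attack" if "trigger" in tokens else "benign"

poisoned = [(["trigger", "img_1", "img_2"], "attack")]
print(purify_dataset(toy_model, poisoned))
```

With the trigger token masked out, the toy model produces the benign label, so the purified dataset keeps the original visual input but drops the attacker-chosen target.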