🤖 AI Summary
Contemporary multimodal large language models (MLLMs) face diverse jailbreak attacks, yet existing defenses generalize poorly and are often limited to a single modality. Method: We propose Test-time IMmunization (TIM), a test-time self-evolving defense framework that decouples dynamic attack detection from refusal-driven safety fine-tuning, enabling unified protection against cross-modal (text and image) jailbreaks. TIM incorporates gist token training, dynamic trigger detection, and modular parameter isolation to ensure both robustness and inference stability. Contribution/Results: Extensive evaluation across multiple LLMs and MLLMs demonstrates that TIM significantly improves jailbreak resistance while preserving original task performance, with negligible degradation (<0.5%). TIM is the first jailbreak defense framework to combine universality (across models and modalities), adaptivity (test-time evolution), and practical deployability.
📝 Abstract
While (multimodal) large language models (LLMs) have attracted widespread attention for their exceptional capabilities, they remain vulnerable to jailbreak attacks. Various defense methods have been proposed to counter jailbreak attacks; however, they are often tailored to specific attack types, limiting their effectiveness against diverse adversarial strategies. For instance, rephrasing-based defenses are effective against textual adversarial jailbreaks but fail to counteract image-based attacks. To overcome these limitations, we propose a universal defense framework, termed Test-time IMmunization (TIM), which can adaptively defend against various jailbreak attacks in a self-evolving way. Specifically, TIM first trains a gist token for efficient detection, which it then uses to identify jailbreak activities during inference. When jailbreak attempts are detected, TIM performs safety fine-tuning on the detected jailbreak instructions paired with refusal answers. Furthermore, to mitigate potential degradation of the detector caused by parameter updates during safety fine-tuning, we decouple the fine-tuning process from the detection module. Extensive experiments on both LLMs and multimodal LLMs demonstrate the efficacy of TIM.
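The test-time loop described above (detect with a frozen gist-token detector, buffer detected jailbreak instructions with refusal answers, fine-tune a module decoupled from the detector) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: `GistDetector`, `TIMDefender`, and the keyword-weight scorer standing in for the learned gist-token classifier are all hypothetical names invented here, and a memorizing dictionary stands in for the safety-fine-tuned parameters.

```python
# Hypothetical sketch of the TIM test-time immunization loop.
# A toy keyword scorer replaces the learned gist-token detector, and a
# lookup-table "adapter" replaces actual safety fine-tuning.

REFUSAL = "I cannot help with that request."

class GistDetector:
    """Stands in for the gist-token jailbreak detector (frozen at test time)."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        # Toy "parameters": suspicious cue phrases and their weights.
        self.weights = {"ignore previous": 0.6, "bypass": 0.4, "jailbreak": 0.7}

    def score(self, instruction):
        text = instruction.lower()
        return sum(w for cue, w in self.weights.items() if cue in text)

    def is_jailbreak(self, instruction):
        return self.score(instruction) >= self.threshold

class TIMDefender:
    """Wraps a base model. The safety-tuned 'adapter' is kept separate from
    the detector, mirroring the decoupling described in the abstract."""
    def __init__(self, base_model, detector):
        self.base_model = base_model   # callable: instruction -> answer
        self.detector = detector       # never updated by safety_finetune
        self.adapter = {}              # stand-in for safety-fine-tuned params
        self.buffer = []               # detected (instruction, refusal) pairs

    def safety_finetune(self):
        # Stand-in for fine-tuning on (jailbreak instruction, refusal) pairs:
        # here we simply memorize refusals; the detector is left untouched.
        for instruction, refusal in self.buffer:
            self.adapter[instruction] = refusal
        self.buffer.clear()

    def respond(self, instruction):
        if instruction in self.adapter:  # already immunized against this attack
            return self.adapter[instruction]
        if self.detector.is_jailbreak(instruction):
            self.buffer.append((instruction, REFUSAL))
            self.safety_finetune()       # test-time self-evolution step
            return REFUSAL
        return self.base_model(instruction)
```

A benign instruction passes straight through to the base model, while a detected jailbreak is refused and folded into the (decoupled) safety-tuned state, so repeated attempts are rejected without re-invoking the detector.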