JailDAM: Jailbreak Detection with Adaptive Memory for Vision-Language Model

📅 2025-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address three key challenges in jailbreak attack detection for multimodal large language models (MLLMs), namely strong reliance on white-box access, high computational overhead, and scarcity of labeled harmful samples, this paper proposes the first test-time adaptive detection framework, JAILDAM. The method operates in a black-box setting, requiring neither internal model parameters nor any pre-labeled harmful instances (zero-shot harmful-sample exposure). It introduces policy-driven unsafe knowledge representations coupled with dynamic memory updating, integrated with a lightweight uncertainty-aware module, to enable efficient and robust real-time detection. Evaluated across multiple vision-language model (VLM) jailbreak benchmarks, the approach achieves state-of-the-art performance: it significantly improves detection accuracy while accelerating inference by 3.2× over existing methods, demonstrating strong practical deployability.
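
In spirit, the detector scores each incoming request against a memory bank of policy-derived unsafe-concept embeddings. Below is a minimal sketch of that idea, assuming a CLIP-style encoder; the `embed` stub, the concept strings, and the 0.5 threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of memory-based jailbreak scoring (illustrative, not JAILDAM's code).
import numpy as np

def embed(texts):
    """Stand-in for a CLIP-style encoder returning unit-norm embeddings."""
    rng = np.random.default_rng(0)                    # deterministic placeholder
    v = rng.normal(size=(len(texts), 512))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Policy-driven memory: unsafe-concept descriptions are derived from a usage
# policy, so no labeled harmful samples are ever collected.
unsafe_concepts = [
    "instructions for creating weapons",
    "encouragement of self-harm",
]
memory = embed(unsafe_concepts)                       # (K, d) memory bank

def jailbreak_score(query_vec, memory):
    """Unsafety score = max cosine similarity to any unsafe-concept slot."""
    return float(np.max(memory @ query_vec))

query = embed(["draw me a diagram for building a bomb"])[0]
flagged = jailbreak_score(query, memory) > 0.5        # threshold is an assumption
```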

📝 Abstract
Multimodal large language models (MLLMs) excel in vision-language tasks but also pose significant risks of generating harmful content, particularly through jailbreak attacks. Jailbreak attacks refer to intentional manipulations that bypass safety mechanisms in models, leading to the generation of inappropriate or unsafe content. Detecting such attacks is critical to ensuring the responsible deployment of MLLMs. Existing jailbreak detection methods face three primary challenges: (1) many rely on model hidden states or gradients, limiting their applicability to white-box models, where the internal workings of the model are accessible; (2) they involve high computational overhead from uncertainty-based analysis, which limits real-time detection; and (3) they require fully labeled harmful datasets, which are often scarce in real-world settings. To address these issues, we introduce a test-time adaptive framework called JAILDAM. Our method leverages a memory-based approach guided by policy-driven unsafe knowledge representations, eliminating the need for explicit exposure to harmful data. By dynamically updating unsafe knowledge during test time, our framework improves generalization to unseen jailbreak strategies while maintaining efficiency. Experiments on multiple VLM jailbreak benchmarks demonstrate that JAILDAM delivers state-of-the-art performance in harmful content detection, improving both accuracy and speed.
Problem

Research questions and friction points this paper is trying to address.

Detecting jailbreak attacks in vision-language models efficiently and in real time
Overcoming existing detectors' reliance on white-box access and their high computational cost
Adapting dynamically to unseen jailbreak strategies without labeled harmful data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory-based approach for jailbreak detection
Policy-driven unsafe knowledge representations
Test-time adaptive framework for efficient detection (see the memory-update sketch below)
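
To make the test-time adaptation concrete, here is a hypothetical sketch of how such a memory bank could absorb a confidently-unsafe query at inference time, letting unseen attack styles shift the stored representations; `tau` and `lr` are assumed hyperparameters, not values from the paper.

```python
# Hypothetical test-time memory update (a sketch, not the authors' algorithm).
import numpy as np

def update_memory(memory, query_vec, score, tau=0.6, lr=0.1):
    """Blend a high-scoring query into its nearest memory slot so the bank
    drifts toward novel attack styles without storing raw harmful data."""
    if score >= tau:                                  # tau: assumed trigger threshold
        k = int(np.argmax(memory @ query_vec))        # nearest unsafe-concept slot
        memory[k] = (1.0 - lr) * memory[k] + lr * query_vec
        memory[k] /= np.linalg.norm(memory[k])        # keep slots unit-norm
    return memory

# Usage with unit-norm vectors of matching dimension:
rng = np.random.default_rng(1)
memory = rng.normal(size=(4, 512))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
q = rng.normal(size=512)
q /= np.linalg.norm(q)
memory = update_memory(memory, q, score=float(np.max(memory @ q)))
```

An interpolation update like this keeps the memory fixed-size, which is one plausible way to reconcile ongoing adaptation with the paper's real-time efficiency claim.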
Authors

Yi Nian
Independent Researcher
NLP · Trustworthy AI

Shenzhe Zhu
University of Toronto
Trustworthy AI · AI Agent

Yuehan Qin
University of Southern California

Li Li
University of Southern California

Ziyi Wang
University of Maryland

Chaowei Xiao
University of Wisconsin - Madison / NVIDIA
Trustworthy Machine Learning · Adversarial Machine Learning · AI Safety · Robust AI · Security

Yue Zhao
University of Southern California