Rethinking Jailbreak Detection of Large Vision Language Models with Representational Contrastive Scoring

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large vision-language models (LVLMs) are vulnerable to unseen jailbreak attacks, while existing defenses suffer from poor generalizability or high computational overhead. Method: We propose a lightweight detection framework grounded in the geometric structure of internal representations. For the first time, we leverage intermediate-layer representations of LVLMs as core safety signals, introducing the Representational Contrastive Scoring (RCS) paradigm. RCS learns contrastive projections on safety-critical layers to distinguish malicious intent from benign novel inputs, overcoming the high false-rejection rate of conventional one-class anomaly detection. Our method combines Mahalanobis/KNN-based contrastive detection, representation-space projection, and contrastive statistical scoring, and requires no fine-tuning or additional model parameters. Results: Under a rigorous evaluation protocol for unseen attacks, our approach achieves state-of-the-art performance with low computational cost and strong interpretability. The code is publicly available.
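The summary does not spell out the scoring rule, but a Mahalanobis-based contrastive detector in the spirit of MCD can be sketched as below: fit class-conditional Gaussian statistics on benign and malicious hidden states, then score an input by the difference of its Mahalanobis distances to the two clusters. The function names, the regularization, and the synthetic "hidden states" are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fit_gaussian(X, eps=1e-3):
    """Fit mean and regularized inverse covariance for a set of representations."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, prec):
    """Mahalanobis distance of x to a Gaussian with mean mu and precision prec."""
    d = x - mu
    return float(np.sqrt(d @ prec @ d))

def mcd_score(x, benign_stats, malicious_stats):
    """Contrastive score: positive means x sits closer to the malicious cluster."""
    return mahalanobis(x, *benign_stats) - mahalanobis(x, *malicious_stats)

# Toy demo with synthetic intermediate-layer representations (two Gaussian clusters).
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(200, 8))
malicious = rng.normal(4.0, 1.0, size=(200, 8))
b_stats, m_stats = fit_gaussian(benign), fit_gaussian(malicious)

print(mcd_score(malicious[0], b_stats, m_stats) > 0)  # closer to malicious cluster
print(mcd_score(benign[0], b_stats, m_stats) < 0)     # closer to benign cluster
```

Because the score is a difference of two distances rather than a single one-class distance, a novel-but-benign input that is far from both clusters is not automatically rejected, which is the paper's stated argument against one-class designs.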

📝 Abstract
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both generalizable to novel threats and efficient for practical deployment. Many current strategies fall short, either targeting specific attack patterns, which limits generalization, or imposing high computational overhead. While lightweight anomaly-detection methods offer a promising direction, we find that their common one-class design tends to confuse novel benign inputs with malicious ones, leading to unreliable over-rejection. To address this, we propose Representational Contrastive Scoring (RCS), a framework built on a key insight: the most potent safety signals reside within the LVLM's own internal representations. Our approach inspects the internal geometry of these representations, learning a lightweight projection to maximally separate benign and malicious inputs in safety-critical layers. This enables a simple yet powerful contrastive score that differentiates true malicious intent from mere novelty. Our instantiations, MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection), achieve state-of-the-art performance on a challenging evaluation protocol designed to test generalization to unseen attack types. This work demonstrates that effective jailbreak detection can be achieved by applying simple, interpretable statistical methods to the appropriate internal representations, offering a practical path towards safer LVLM deployment. Our code is available on GitHub: https://github.com/sarendis56/Jailbreak_Detection_RCS.
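The abstract describes learning a lightweight projection that maximally separates benign and malicious inputs in representation space. One classical way to obtain such a direction is the Fisher discriminant, shown below as an illustrative stand-in; the paper's actual learned projection may differ, and `fisher_projection` and the synthetic clusters are assumptions for demonstration only.

```python
import numpy as np

def fisher_projection(benign, malicious, eps=1e-3):
    """Fisher-discriminant direction: maximizes between-class separation
    relative to within-class scatter (an illustrative stand-in for the
    paper's learned contrastive projection)."""
    mu_b, mu_m = benign.mean(axis=0), malicious.mean(axis=0)
    # Within-class scatter, regularized for numerical stability.
    Sw = np.cov(benign, rowvar=False) + np.cov(malicious, rowvar=False)
    Sw += eps * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu_m - mu_b)
    return w / np.linalg.norm(w)

# Toy demo: two synthetic clusters standing in for safety-critical-layer states.
rng = np.random.default_rng(2)
benign = rng.normal(0.0, 1.0, size=(200, 12))
malicious = rng.normal(2.0, 1.0, size=(200, 12))
w = fisher_projection(benign, malicious)

# After projecting to 1-D, the class means should be clearly separated.
gap = (malicious @ w).mean() - (benign @ w).mean()
print(gap > 1.0)
```

Scoring in the projected space rather than the raw hidden space is what lets a simple statistical rule focus on the safety-relevant directions instead of generic novelty.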
Problem

Research questions and friction points this paper is trying to address.

Detect multimodal jailbreak attacks on Large Vision-Language Models
Improve generalization to unseen attack types efficiently
Reduce false rejection of novel benign inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

RCS uses LVLM internal representations for detection
Lightweight projection separates benign and malicious inputs
MCD and KCD achieve state-of-the-art generalization performance
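The KNN-based instantiation (KCD) can likewise be sketched as a contrast of neighborhood distances: mean distance to the k nearest stored benign representations minus the same quantity for malicious ones. The function name, the choice of Euclidean distance, and the toy banks below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kcd_score(x, benign_bank, malicious_bank, k=5):
    """K-nearest contrastive score: mean distance to the k nearest benign
    representations minus mean distance to the k nearest malicious ones.
    Positive values indicate the input lies closer to malicious geometry."""
    d_b = np.sort(np.linalg.norm(benign_bank - x, axis=1))[:k]
    d_m = np.sort(np.linalg.norm(malicious_bank - x, axis=1))[:k]
    return float(d_b.mean() - d_m.mean())

# Toy demo: two synthetic banks of hidden states.
rng = np.random.default_rng(1)
benign_bank = rng.normal(0.0, 1.0, size=(300, 16))
malicious_bank = rng.normal(3.0, 1.0, size=(300, 16))

# A novel benign input (wider spread than the bank) should still score negative,
# because it remains relatively closer to benign neighbors than to malicious ones.
novel_benign = rng.normal(0.0, 1.5, size=16)
print(kcd_score(novel_benign, benign_bank, malicious_bank) < 0)
```

The nonparametric variant avoids the Gaussian assumption of the Mahalanobis version at the cost of keeping reference banks in memory, a standard trade-off between the two.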
Peichun Hua
Washington University in St. Louis
Hao Li
Washington University in St. Louis
Shanghao Shi
Virginia Tech
Network Security · Machine Learning Security · CPS and IoT Security
Zhiyuan Yu
Texas A&M University
Ning Zhang
Washington University in St. Louis