🤖 AI Summary
Multimodal large vision-language models (LVLMs) are vulnerable to unseen jailbreak attacks, while existing defenses suffer from poor generalizability or high computational overhead.
Method: We propose a lightweight detection framework grounded in the geometric structure of internal representations. For the first time, we leverage intermediate-layer representations of LVLMs as core safety signals, introducing the Representational Contrastive Scoring (RCS) paradigm. RCS learns contrastive projections on safety-critical layers to distinguish malicious intent from benign novel inputs, overcoming the high false-rejection rate of conventional one-class anomaly detection. Our method combines Mahalanobis- and KNN-based contrastive detection, representation-space projection, and contrastive statistical scoring, and requires no fine-tuning of the LVLM or additional model parameters.
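To make the contrastive-scoring idea concrete, here is a minimal sketch of a Mahalanobis-based contrastive detector in the spirit of MCD. It is an illustrative assumption, not the paper's implementation: feature extraction from the LVLM's intermediate layers and the learned projection are omitted, and the class statistics are fit directly on toy feature vectors.

```python
import numpy as np

def fit_class_stats(feats):
    """Mean and regularized inverse covariance for one class's features
    (rows = samples, columns = feature dimensions)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_sq(x, mu, cov_inv):
    """Squared Mahalanobis distance from x to a class distribution."""
    d = x - mu
    return float(d @ cov_inv @ d)

def contrastive_score(x, benign_stats, malicious_stats):
    """Positive score: x lies closer (in Mahalanobis distance) to the
    malicious class than to the benign class. Mere novelty inflates both
    distances, so it largely cancels in the difference."""
    return mahalanobis_sq(x, *benign_stats) - mahalanobis_sq(x, *malicious_stats)
```

A higher score flags likely malicious intent; a threshold on this score yields the detector. The contrast of two distances, rather than a single one-class distance, is what avoids rejecting inputs that are merely unfamiliar.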
Results: Under a rigorous evaluation protocol for unseen attacks, our approach achieves state-of-the-art performance with low computational cost and strong interpretability. The code is publicly available.
📝 Abstract
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both generalizable to novel threats and efficient for practical deployment. Many current strategies fall short, either targeting specific attack patterns, which limits generalization, or imposing high computational overhead. While lightweight anomaly-detection methods offer a promising direction, we find that their common one-class design tends to confuse novel benign inputs with malicious ones, leading to unreliable over-rejection. To address this, we propose Representational Contrastive Scoring (RCS), a framework built on a key insight: the most potent safety signals reside within the LVLM's own internal representations. Our approach inspects the internal geometry of these representations, learning a lightweight projection to maximally separate benign and malicious inputs in safety-critical layers. This enables a simple yet powerful contrastive score that differentiates true malicious intent from mere novelty. Our instantiations, MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection), achieve state-of-the-art performance on a challenging evaluation protocol designed to test generalization to unseen attack types. This work demonstrates that effective jailbreak detection can be achieved by applying simple, interpretable statistical methods to the appropriate internal representations, offering a practical path towards safer LVLM deployment. Our code is available on GitHub: https://github.com/sarendis56/Jailbreak_Detection_RCS.
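The k-nearest-neighbor instantiation (KCD) can be sketched the same way: score an input by contrasting its mean distance to stored benign features against its mean distance to stored malicious features. This is a toy illustration under stated assumptions — the feature banks, the choice of k, and the use of plain Euclidean distance on raw tuples are placeholders for the paper's projected intermediate-layer representations.

```python
import math

def knn_distance(x, bank, k=5):
    """Mean Euclidean distance from x to its k nearest neighbors
    in a bank of stored feature vectors (tuples of floats)."""
    dists = sorted(math.dist(x, f) for f in bank)
    return sum(dists[:k]) / min(k, len(dists))

def kcd_score(x, benign_bank, malicious_bank, k=5):
    """Contrastive k-NN score: positive when x sits closer to stored
    malicious features than to stored benign ones."""
    return knn_distance(x, benign_bank, k) - knn_distance(x, malicious_bank, k)
```

As with the Mahalanobis variant, the subtraction is the key design choice: a genuinely novel benign input is far from both banks, so the two terms offset, while a jailbreak input is asymmetrically close to the malicious bank.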