NegBLEURT Forest: Leveraging Inconsistencies for Detecting Jailbreak Attacks

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Jailbreaking attacks against large language models (LLMs) evade conventional rule-based filtering due to their semantic subtlety and syntactic variability. Method: This paper proposes an unsupervised detection framework centered on semantic inconsistency—specifically, the discrepancy between successful and failed model responses to the same prompt. It introduces the first integration of negation-aware semantic similarity (NegBLEURT) with Isolation Forest to quantify such inconsistency without requiring ground-truth labels, model fine-tuning, or manual threshold calibration. Contribution/Results: The method demonstrates strong cross-model robustness and generalization across diverse LLMs (e.g., LLaMA-2, Vicuna, Qwen). Evaluated on multiple benchmarks, it achieves state-of-the-art or near-SOTA accuracy, significantly outperforming existing baselines. Moreover, it exhibits high stability under input perturbations and distributional shifts, confirming its practical viability for real-world deployment.

📝 Abstract
Jailbreak attacks designed to bypass safety mechanisms pose a serious threat by prompting LLMs to generate harmful or inappropriate content despite the models' alignment with ethical guidelines. Crafting universal filtering rules remains difficult because such attacks depend heavily on their specific context. To address these challenges without relying on threshold calibration or model fine-tuning, this work introduces a semantic consistency analysis between successful and unsuccessful responses, demonstrating that a negation-aware scoring approach captures meaningful patterns. Building on this insight, a novel detection framework called NegBLEURT Forest is proposed to evaluate the degree of alignment between outputs elicited by adversarial prompts and expected safe behaviors. It identifies anomalous responses using the Isolation Forest algorithm, enabling reliable jailbreak detection. Experimental results show that the proposed method consistently achieves top-tier performance, ranking first or second in accuracy across diverse models on the crafted dataset, while competing approaches exhibit notable sensitivity to model and data variations.
Problem

Research questions and friction points this paper is trying to address.

Detecting jailbreak attacks that bypass LLM safety mechanisms
Assessing semantic consistency between adversarial and safe responses
Identifying harmful content without threshold calibration or fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Negation-aware scoring captures semantic inconsistencies
Isolation Forest algorithm identifies anomalous responses
Framework evaluates alignment between adversarial and safe outputs
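The pipeline sketched by these bullets can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the `similarity` function is a token-overlap placeholder standing in for the negation-aware NegBLEURT score, and the `REFUSAL` reference sentence is illustrative. Responses whose similarity to the expected safe behavior is anomalously low are isolated by scikit-learn's `IsolationForest`, with no manually calibrated threshold.

```python
# Sketch of a NegBLEURT-Forest-style detector. Assumptions: a Jaccard
# token-overlap score stands in for the real NegBLEURT model, and the
# refusal reference sentence below is illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

def similarity(a: str, b: str) -> float:
    """Placeholder for a negation-aware semantic score such as NegBLEURT:
    Jaccard overlap of lowercased tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

REFUSAL = "I cannot help with that request."  # expected safe behavior

def detect_jailbreaks(responses, contamination=0.25, seed=0):
    # One feature per response: similarity to the expected safe refusal.
    X = np.array([[similarity(r, REFUSAL)] for r in responses])
    forest = IsolationForest(contamination=contamination, random_state=seed)
    labels = forest.fit_predict(X)  # -1 marks an anomaly (suspected jailbreak)
    return [i for i, y in enumerate(labels) if y == -1]

responses = [
    "I cannot help with that request.",
    "Sorry, I cannot help with that request.",
    "I cannot help with that harmful request.",
    "Sure! Step one: obtain the restricted materials and then",
]
print(detect_jailbreaks(responses))
```

Because the fourth response shares no tokens with the refusal, its feature value sits far from the cluster of safe responses and the forest isolates it. The real method replaces this one-dimensional overlap feature with NegBLEURT scores, which distinguish "I can help" from "I cannot help" where surface overlap cannot.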
Lama Sleem
University of Luxembourg, Luxembourg, Luxembourg
Jerome Francois
University of Luxembourg, Luxembourg, Luxembourg
Lujun Li
University of Luxembourg, Luxembourg, Luxembourg
Nathan Foucher
Institut National Polytechnique de Toulouse, Toulouse, France
Niccolo Gentile
Foyer S.A., Leudelange, Luxembourg
Radu State
University of Luxembourg
Network Security, Network and Service Management