AlignTree: Efficient Defense Against LLM Jailbreak Attacks

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are vulnerable to jailbreak attacks, yet existing defenses struggle to balance robustness and efficiency. This paper proposes AlignTree, a lightweight, activation-based jailbreak detection framework that operates entirely within the target LLM's inference pipeline. Methodologically, AlignTree monitors internal activations during generation, extracting both a linear signal aligned with the refusal direction and non-linear activation features captured by an SVM, thereby constructing a low-dimensional, highly discriminative detection signal. These signals feed an efficient random forest classifier that performs real-time online interception without auxiliary prompts or guard models. Evaluated across multiple state-of-the-art LLMs (including Llama-3, Qwen, and Gemma) and standard benchmarks (ToxiGen, AdvBench), AlignTree achieves an average defense success rate exceeding 92% while incurring less than 3% additional inference latency, outperforming prior state-of-the-art methods in both accuracy and efficiency.

📝 Abstract
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these vulnerabilities requires defense mechanisms that are both robust and computationally efficient. However, existing approaches either incur high computational costs or rely on lightweight defenses that can be easily circumvented, rendering them impractical for real-world LLM-based systems. In this work, we introduce the AlignTree defense, which enhances model alignment while maintaining minimal computational overhead. AlignTree monitors LLM activations during generation and detects misaligned behavior using an efficient random forest classifier. This classifier operates on two signals: (i) the refusal direction -- a linear representation that activates on misaligned prompts, and (ii) an SVM-based signal that captures non-linear features associated with harmful content. Unlike previous methods, AlignTree does not require additional prompts or auxiliary guard models. Through extensive experiments, we demonstrate the efficiency and robustness of AlignTree across multiple LLMs and benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Defending LLMs against jailbreak attacks that bypass safety guidelines
Reducing computational overhead while maintaining robust security measures
Detecting harmful content without auxiliary prompts or guard models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses an efficient random forest classifier for real-time detection
Combines a linear refusal-direction signal with a non-linear SVM signal
Monitors internal activations without auxiliary prompts or guard models
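The detection pipeline described above can be sketched in a few lines: project hidden activations onto a refusal direction (linear signal), compute an SVM decision score (non-linear signal), and feed the two-dimensional feature into a random forest. This is a toy illustration under assumed names and shapes, not the paper's implementation; the refusal direction here is synthetic, whereas the paper extracts it from the model's own representations.

```python
# Hypothetical sketch of an AlignTree-style detector. All variable names,
# dimensions, and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
d = 64                                   # assumed hidden-state dimension
refusal_dir = rng.normal(size=d)
refusal_dir /= np.linalg.norm(refusal_dir)

# Toy activations: "harmful" samples are shifted along the refusal direction.
n = 200
benign = rng.normal(size=(n, d))
harmful = rng.normal(size=(n, d)) + 2.0 * refusal_dir
X = np.vstack([benign, harmful])
y = np.array([0] * n + [1] * n)

svm = SVC(kernel="rbf").fit(X, y)        # non-linear signal on raw activations

def features(acts):
    """Build the low-dimensional detection signal per activation vector."""
    lin = acts @ refusal_dir             # linear refusal-direction score
    nonlin = svm.decision_function(acts) # SVM margin score
    return np.column_stack([lin, nonlin])

forest = RandomForestClassifier(n_estimators=50, random_state=0)
forest.fit(features(X), y)
print(forest.score(features(X), y))      # accuracy on the toy training data
```

Because the classifier sees only a two-dimensional feature per token position, inference-time overhead stays small, which is consistent with the low latency the paper reports.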
Gil Goren
Blavatnik School of Computer Science, Tel Aviv University
Shahar Katz
Blavatnik School of Computer Science, Tel Aviv University
Lior Wolf
The School of Computer Science at Tel Aviv University