🤖 AI Summary
Large language models (LLMs) remain vulnerable to jailbreak attacks, and existing defenses struggle to balance robustness with efficiency. This paper proposes AlignTree, a lightweight, activation-based jailbreak detection framework that operates entirely within the target LLM's inference pipeline. Methodologically, AlignTree monitors internal activations during generation, combining a linear signal (the projection onto a refusal direction) with a non-linear, SVM-based signal that captures features associated with harmful content, yielding a low-dimensional, highly discriminative detection representation. An efficient random forest classifier then enables real-time online interception without extra prompts or auxiliary guard models. Evaluated across multiple state-of-the-art LLMs (including Llama-3, Qwen, and Gemma) and standard benchmarks (ToxiGen, AdvBench), AlignTree achieves an average defense success rate exceeding 92% while incurring less than 3% additional inference latency, outperforming prior state-of-the-art methods in both accuracy and efficiency.
📝 Abstract
Large Language Models (LLMs) are vulnerable to adversarial attacks that bypass safety guidelines and generate harmful content. Mitigating these vulnerabilities requires defense mechanisms that are both robust and computationally efficient. However, existing approaches either incur high computational costs or rely on lightweight defenses that can be easily circumvented, rendering them impractical for real-world LLM-based systems. In this work, we introduce the AlignTree defense, which enhances model alignment while maintaining minimal computational overhead. AlignTree monitors LLM activations during generation and detects misaligned behavior using an efficient random forest classifier. This classifier operates on two signals: (i) the refusal direction -- a linear representation that activates on misaligned prompts, and (ii) an SVM-based signal that captures non-linear features associated with harmful content. Unlike previous methods, AlignTree does not require additional prompts or auxiliary guard models. Through extensive experiments, we demonstrate the efficiency and robustness of AlignTree across multiple LLMs and benchmarks.
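The two-signal design described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the toy data, the mean-difference refusal direction, and all hyperparameters below are illustrative assumptions; it only shows the shape of the pipeline, i.e. a linear refusal-direction projection plus a non-linear SVM score, combined by a random forest.

```python
# Hypothetical sketch of an AlignTree-style detector (illustrative, not the
# paper's code): project activations onto a "refusal direction", add a
# non-linear SVM score, and classify the pair with a random forest.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
d = 64  # placeholder hidden-state dimensionality

# Toy "activations": benign vs. misaligned prompts, shifted along one axis.
X_benign = rng.normal(0.0, 1.0, size=(200, d))
X_harmful = rng.normal(0.0, 1.0, size=(200, d)) + 2.0 * np.eye(d)[0]
X = np.vstack([X_benign, X_harmful])
y = np.array([0] * 200 + [1] * 200)

# Signal (i): refusal direction, here the normalized difference of class means.
refusal_dir = X_harmful.mean(axis=0) - X_benign.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)
lin_signal = X @ refusal_dir  # one scalar projection per example

# Signal (ii): non-linear SVM (RBF kernel) decision score over activations.
svm = SVC(kernel="rbf").fit(X, y)
svm_signal = svm.decision_function(X)

# Random forest over the two low-dimensional signals.
features = np.column_stack([lin_signal, svm_signal])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, y)
print(f"train accuracy: {clf.score(features, y):.2f}")
```

Because both signals reduce each activation vector to a single scalar, the downstream classifier sees only a 2-dimensional input, which is what keeps the per-token detection overhead small.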