🤖 AI Summary
Existing defense methods struggle to distinguish maliciously misaligned instructions from benign, task-aligned ones, often leading to erroneous judgments. This work proposes the first tri-class detection framework built around instruction alignment, leveraging attention-map features from large language models to differentiate among misaligned instructions, aligned instructions, and non-instructional inputs. To support this approach, the authors construct the first systematic benchmark dataset encompassing all three input types and design an attention-based tri-class detector. Experimental results demonstrate that the method significantly outperforms baseline approaches on both the newly curated dataset and existing benchmarks, identifying misalignment attacks with high precision while reliably accepting aligned instructions as legitimate.
📝 Abstract
Prompt injection attacks insert malicious instructions into an LLM's input to steer it toward an attacker-chosen task instead of the intended one. Existing detection defenses typically classify any input containing an instruction as malicious, leading to misclassification of benign inputs whose instructions align with the intended task. In this work, we account for the instruction hierarchy and distinguish among three categories: inputs with misaligned instructions, inputs with aligned instructions, and non-instruction inputs. We introduce AlignSentinel, a three-class classifier that leverages features derived from an LLM's attention maps to categorize inputs accordingly. To support evaluation, we construct the first systematic benchmark containing inputs from all three categories. Experiments on both our benchmark and existing ones, where inputs with aligned instructions are largely absent, show that AlignSentinel accurately detects inputs with misaligned instructions and substantially outperforms baselines.
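The abstract describes a three-class classifier over features derived from LLM attention maps. As a rough illustration only (the paper's actual features, model, and classifier are not specified here), the sketch below fakes an attention map, computes a few generic summary statistics, and applies a toy nearest-centroid tri-class rule. Every function name, feature, and class label in this snippet is an assumption for illustration, not AlignSentinel's method.

```python
import numpy as np

# Illustrative class labels mirroring the paper's three input categories.
LABELS = ["misaligned", "aligned", "non_instruction"]

def attention_features(attn: np.ndarray) -> np.ndarray:
    """Hypothetical feature extraction from an attention map.

    attn: array of shape (heads, seq, seq). Returns a small feature vector
    of generic statistics (entropy, diagonal mass, last-token peak) -- these
    are placeholder features, not the ones used in the paper.
    """
    attn = attn / attn.sum(axis=-1, keepdims=True)          # row-normalize
    entropy = -(attn * np.log(attn + 1e-12)).sum(axis=-1).mean()
    diag_mass = np.trace(attn, axis1=-2, axis2=-1).mean() / attn.shape[-1]
    last_peak = attn[..., -1, :].max(axis=-1).mean()
    return np.array([entropy, diag_mass, last_peak])

class TriClassDetector:
    """Toy nearest-centroid classifier standing in for the real detector."""

    def fit(self, X: np.ndarray, y: np.ndarray) -> "TriClassDetector":
        # One centroid per class in feature space.
        self.centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        classes = sorted(self.centroids)
        dists = np.stack(
            [np.linalg.norm(X - self.centroids[c], axis=1) for c in classes]
        )
        return np.array(classes)[dists.argmin(axis=0)]
```

In a real pipeline, `attn` would come from a forward pass of the protected LLM, and the detector would be trained on labeled examples of all three input categories from the benchmark.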