Zero-Shot Embedding Drift Detection: A Lightweight Defense Against Prompt Injections in LLMs

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to prompt injection attacks, wherein adversaries bypass alignment safeguards through indirect inputs to elicit harmful outputs. To counter this threat, the authors propose ZEDD, a lightweight, general-purpose, zero-shot detection framework that operates without access to model internals, prior knowledge of attacks, or task-specific fine-tuning. ZEDD identifies both direct and indirect injection attacks by quantifying the semantic shift, measured via cosine similarity, between adversarial and benign prompts in the embedding space. Evaluated on a re-annotated version of the LLMail-Inject dataset spanning five attack categories, ZEDD achieves over 93% detection accuracy with a false positive rate below 3% across multiple LLMs, including Llama 3, Qwen 2, and Mistral, significantly outperforming existing methods.

📝 Abstract
Prompt injection attacks have become an increasing vulnerability for LLM applications, where adversarial prompts exploit indirect input channels such as emails or user-generated content to circumvent alignment safeguards and induce harmful or unintended outputs. Despite advances in alignment, even state-of-the-art LLMs remain broadly vulnerable to adversarial prompts, underscoring the urgent need for robust, proactive, and generalizable detection mechanisms beyond inefficient, model-specific patches. In this work, we propose Zero-Shot Embedding Drift Detection (ZEDD), a lightweight, low-engineering-overhead framework that identifies both direct and indirect prompt injection attempts by quantifying semantic shifts in embedding space between benign and suspect inputs. ZEDD operates without requiring access to model internals, prior knowledge of attack types, or task-specific retraining, enabling efficient zero-shot deployment across diverse LLM architectures. Our method uses adversarial-clean prompt pairs and measures embedding drift via cosine similarity to capture subtle adversarial manipulations inherent to real-world injection attacks. To ensure robust evaluation, we assemble and re-annotate the comprehensive LLMail-Inject dataset spanning five injection categories derived from publicly available sources. Extensive experiments demonstrate that embedding drift is a robust and transferable signal, outperforming traditional methods in detection accuracy and operational efficiency. With greater than 93% accuracy in classifying prompt injections across model architectures such as Llama 3, Qwen 2, and Mistral, and a false positive rate of <3%, our approach offers a lightweight, scalable defense layer that integrates into existing LLM pipelines, addressing a critical gap in securing LLM-powered systems against adaptive adversarial threats.
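The core idea described in the abstract, scoring a suspect input by its embedding drift (one minus cosine similarity) from a benign reference prompt, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `toy_embed` hashed bag-of-words embedder is a stand-in for whatever real embedding model ZEDD uses, and the `0.5` threshold is an arbitrary placeholder, not a value from the paper.

```python
import numpy as np


def toy_embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy hashed bag-of-words embedder (placeholder for a real
    sentence-embedding model). Returns a unit-normalized vector."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec


def embedding_drift(benign: str, suspect: str) -> float:
    """Drift = 1 - cosine similarity between the benign/suspect pair.
    Vectors are already unit-normalized, so cosine is a plain dot product."""
    return 1.0 - float(np.dot(toy_embed(benign), toy_embed(suspect)))


def flag_injection(benign: str, suspect: str, threshold: float = 0.5) -> bool:
    """Flag the suspect prompt if its drift exceeds a (hypothetical) threshold."""
    return embedding_drift(benign, suspect) > threshold
```

With a real embedding model, semantically faithful paraphrases of the benign prompt would keep drift low, while an appended injection payload ("ignore previous instructions...") would push the suspect input away from the benign reference in embedding space and raise the drift score past the threshold.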
Problem

Research questions and friction points this paper is trying to address.

prompt injection
LLM security
adversarial attacks
embedding drift
zero-shot detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Zero-Shot Detection
Embedding Drift
Prompt Injection
Lightweight Defense
Cosine Similarity
Anirudh Sekar
Mrinal Agarwal
Rachel Sharma
Akitsugu Tanaka
Jasmine Zhang
Arjun Damerla
Algoverse AI Research
Kevin Zhu
PhD, Stanford University; Professor of Business+Technology, University of California, San Diego
IT · data · e-commerce · software · digital transformation