Defending Against Prompt Injection With a Few DefensiveTokens

📅 2025-07-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are vulnerable to prompt injection attacks when they interact with external data. Method: the paper proposes a lightweight test-time defense that prepends a small number of learnable defense tokens to the LLM input; only the embeddings of these tokens are optimized end to end, while the model parameters stay frozen. Contribution/Results: the approach is the first test-time defense to match the robustness of training-time methods, giving system developers a configurable trade-off among security, latency, and utility. Experiments show that inserting only 2-4 defense tokens substantially mitigates diverse prompt injection attacks, with average utility degradation under 1% and negligible inference overhead. The implementation is open-sourced.

📝 Abstract
When large language model (LLM) systems interact with external data to perform complex tasks, a new attack, namely prompt injection, becomes a significant threat. By injecting instructions into the data accessed by the system, the attacker is able to override the initial user task with an arbitrary task directed by the attacker. To secure the system, test-time defenses, e.g., defensive prompting, have been proposed for system developers to attain security only when needed in a flexible manner. However, they are much less effective than training-time defenses that change the model parameters. Motivated by this, we propose DefensiveToken, a test-time defense with prompt injection robustness comparable to training-time alternatives. DefensiveTokens are newly inserted as special tokens, whose embeddings are optimized for security. In security-sensitive cases, system developers can append a few DefensiveTokens before the LLM input to achieve security with a minimal utility drop. In scenarios where security is less of a concern, developers can simply skip DefensiveTokens; the LLM system remains the same as there is no defense, generating high-quality responses. Thus, DefensiveTokens, if released alongside the model, allow a flexible switch between the state-of-the-art (SOTA) utility and almost-SOTA security at test time. The code is available at https://github.com/Sizhe-Chen/DefensiveToken.
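The flexible switch described in the abstract amounts to a one-line decision at inference time: prepend the released DefensiveToken embeddings when security matters, or pass the input through untouched otherwise. A minimal sketch, assuming hypothetical pre-optimized embeddings shipped alongside the model:

```python
import numpy as np

d, k = 4, 2
# Hypothetical optimized DefensiveToken embeddings (k tokens, dim d);
# in the paper these would be released alongside the model weights.
defensive_embeds = np.full((k, d), 0.1)
user_embeds = np.arange(3 * d, dtype=float).reshape(3, d)

def build_input(user_embeds, secure):
    """Prepend DefensiveTokens only when the developer opts into security."""
    if secure:
        return np.concatenate([defensive_embeds, user_embeds], axis=0)
    # No defense: the model sees exactly the same input as an
    # undefended system, so utility is unchanged.
    return user_embeds

print(build_input(user_embeds, secure=True).shape)
print(build_input(user_embeds, secure=False).shape)
```

Because skipping the tokens leaves the input byte-for-byte identical to the undefended system, developers pay the (small) utility cost only on requests they flag as security-sensitive.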
Problem

Research questions and friction points this paper is trying to address.

How to defend LLM systems against prompt injection attacks
Whether optimizing a few token embeddings alone can provide robust security
How to let developers trade off security and utility flexibly at test time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inserting special defensive tokens for security
Optimizing token embeddings for robust defense
Flexibly switching between utility and security