🤖 AI Summary
Large language models (LLMs) are vulnerable to prompt injection attacks when they interact with external data. Method: The paper proposes a lightweight, learnable test-time defense that prepends a small number of optimizable defense tokens to the input; the tokens' embeddings are optimized end-to-end and can be combined with defensive prompting strategies, all without modifying any model parameters. Contribution/Results: The approach is the first test-time defense to match the performance of training-time methods, enabling a configurable trade-off among security, latency, and accuracy. Experiments show that inserting only 2–4 defense tokens substantially mitigates diverse prompt injection attacks, with average accuracy degradation under 1% and negligible inference overhead. The implementation is open-sourced and readily deployable in practice.
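To make the mechanism concrete, below is a minimal sketch, not the authors' released code, of the core idea: learn embeddings for a few new defense tokens while every model weight stays frozen, in the style of soft prompt tuning. The checkpoint name, hyperparameters, and the exact training objective are illustrative assumptions.

```python
# Sketch (assumed details, not the official implementation): optimize only the
# embeddings of a few prepended defense tokens; all LLM weights stay frozen.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
for p in model.parameters():
    p.requires_grad_(False)  # model parameters are never modified

num_defense_tokens = 4  # the paper reports 2-4 tokens suffice
embed_dim = model.get_input_embeddings().embedding_dim
defense_embeds = torch.nn.Parameter(  # the only trainable tensor
    torch.randn(num_defense_tokens, embed_dim) * 0.02
)
optimizer = torch.optim.Adam([defense_embeds], lr=1e-3)

def defended_loss(input_ids: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Prepend the defense embeddings to the token embeddings, run the frozen
    LM, and return the loss used to optimize `defense_embeds` for security."""
    tok_embeds = model.get_input_embeddings()(input_ids)            # (B, T, D)
    prefix = defense_embeds.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prefix, tok_embeds], dim=1)          # (B, K+T, D)
    pad = torch.full(  # -100 masks the prefix positions out of the loss
        (input_ids.size(0), num_defense_tokens), -100,
        dtype=labels.dtype, device=labels.device,
    )
    out = model(inputs_embeds=inputs_embeds, labels=torch.cat([pad, labels], dim=1))
    return out.loss
```

In a training loop, one would minimize this loss on examples containing injected instructions, with labels targeting the correct (uninjected) response; the precise data mixture is the paper's contribution and is not reproduced here.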
📝 Abstract
When large language model (LLM) systems interact with external data to perform complex tasks, a new attack, namely prompt injection, becomes a significant threat. By injecting instructions into the data accessed by the system, the attacker can override the user's original task with an arbitrary task of the attacker's choosing. To secure the system, test-time defenses, e.g., defensive prompting, have been proposed, allowing system developers to attain security flexibly, only when it is needed. However, they are much less effective than training-time defenses, which change the model parameters. Motivated by this, we propose DefensiveToken, a test-time defense with prompt injection robustness comparable to training-time alternatives. DefensiveTokens are newly inserted special tokens whose embeddings are optimized for security. In security-sensitive cases, system developers can prepend a few DefensiveTokens to the LLM input to achieve security with a minimal utility drop. In scenarios where security is less of a concern, developers can simply skip the DefensiveTokens; the LLM system then behaves exactly as if no defense were deployed, generating high-quality responses. Thus, DefensiveTokens, if released alongside the model, allow a flexible switch at test time between state-of-the-art (SOTA) utility and almost-SOTA security. The code is available at https://github.com/Sizhe-Chen/DefensiveToken.
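The test-time switch the abstract describes could look like the following sketch; `DEFENSE_IDS` is a hypothetical name for the IDs of the released special tokens, and the specific ID values are placeholders.

```python
# Illustrative sketch of the deploy-time toggle (assumed names, not an API):
# prepend the DefensiveTokens only when the request is security-sensitive.
DEFENSE_IDS = [128256, 128257]  # placeholder IDs for the added special tokens

def build_input_ids(prompt_ids: list[int], secure: bool) -> list[int]:
    """With secure=True, prepend DefensiveTokens; otherwise the input, and
    hence the system's behavior, is identical to the undefended model."""
    return DEFENSE_IDS + prompt_ids if secure else prompt_ids
```

Because the undefended path is byte-identical to the original model input, skipping the tokens costs nothing in utility, which is what enables the flexible SOTA-utility/almost-SOTA-security switch.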