🤖 AI Summary
Prompt injection attacks pose a severe threat to large language models (LLMs), yet existing fine-tuning-based defenses lack robustness against strong and adaptive attacks. This paper introduces SecInfer, the first training-free defense that adapts the inference-time scaling paradigm to prompt injection mitigation. Its core mechanism uses a varied set of system prompts to guide diverse sampling, generating multiple response paths; it then performs target-task-guided aggregation to select the output most likely to accomplish the intended task. SecInfer requires no model fine-tuning, which benefits both security and generalization. Extensive experiments demonstrate that SecInfer achieves significantly higher defense success rates than state-of-the-art (SOTA) defenses and other inference-time scaling approaches across diverse attack settings, including black-box, white-box, and adaptive attacks.
📝 Abstract
Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses typically rely on fine-tuning an LLM to enhance its security, but they achieve limited effectiveness against strong attacks. In this work, we propose *SecInfer*, a novel defense against prompt injection attacks built on *inference-time scaling*, an emerging paradigm that boosts LLM capability by allocating more compute resources for reasoning during inference. SecInfer consists of two key steps: *system-prompt-guided sampling*, which generates multiple responses for a given input by exploring diverse reasoning paths through a varied set of system prompts, and *target-task-guided aggregation*, which selects the response most likely to accomplish the intended task. Extensive experiments show that, by leveraging additional compute at inference, SecInfer effectively mitigates both existing and adaptive prompt injection attacks, outperforming state-of-the-art defenses as well as existing inference-time scaling approaches.
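The two steps described above can be sketched in a few lines. This is a minimal illustration only: `llm_generate` and `task_alignment_score` are hypothetical stand-ins (a real deployment would call an actual LLM and a real alignment scorer, neither of which is specified here), and the stub behaviors are invented for demonstration.

```python
def llm_generate(system_prompt: str, user_input: str) -> str:
    # Stub LLM (assumption): one system prompt yields a compromised
    # reasoning path, the others yield benign ones.
    if "ignore defenses" in system_prompt:
        return "INJECTED: attacker goal"
    return f"summary of: {user_input}"

def task_alignment_score(response: str, target_task: str) -> float:
    # Stub scorer (assumption): reward responses that mention the
    # intended task; a real scorer would judge task completion.
    return 1.0 if target_task.split()[0] in response else 0.0

def secinfer(user_input: str, target_task: str, system_prompts: list[str]) -> str:
    # Step 1: system-prompt-guided sampling -- explore diverse reasoning
    # paths by generating one response per system prompt.
    candidates = [llm_generate(sp, user_input) for sp in system_prompts]
    # Step 2: target-task-guided aggregation -- select the candidate most
    # likely to accomplish the intended task.
    return max(candidates, key=lambda r: task_alignment_score(r, target_task))
```

Even when one sampled path is hijacked by an injected instruction, the aggregation step can still return a response from an uncompromised path, which is the intuition behind spending extra inference compute on multiple samples.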