SecInfer: Preventing Prompt Injection via Inference-time Scaling

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prompt injection attacks pose a severe threat to large language models (LLMs), yet existing fine-tuning-based defenses exhibit insufficient robustness against strong and adaptive attacks. This paper introduces SecInfer, a training-free defense that adapts the inference-time scaling paradigm to prompt injection mitigation. Its core mechanism uses a varied set of system prompts to guide diverse sampling, generating multiple candidate responses; it then performs target-task-guided aggregation to select the response most likely to accomplish the intended task. SecInfer requires no model fine-tuning, which benefits both security and generalization. Extensive experiments demonstrate that SecInfer achieves significantly higher defense success rates than state-of-the-art (SOTA) defenses and other inference-time scaling approaches across diverse attack settings, including black-box, white-box, and adaptive attacks.

📝 Abstract
Prompt injection attacks pose a pervasive threat to the security of Large Language Models (LLMs). State-of-the-art prevention-based defenses typically rely on fine-tuning an LLM to enhance its security, but they achieve limited effectiveness against strong attacks. In this work, we propose SecInfer, a novel defense against prompt injection attacks built on inference-time scaling, an emerging paradigm that boosts LLM capability by allocating more compute resources for reasoning during inference. SecInfer consists of two key steps: system-prompt-guided sampling, which generates multiple responses for a given input by exploring diverse reasoning paths through a varied set of system prompts, and target-task-guided aggregation, which selects the response most likely to accomplish the intended task. Extensive experiments show that, by leveraging additional compute at inference, SecInfer effectively mitigates both existing and adaptive prompt injection attacks, outperforming state-of-the-art defenses as well as existing inference-time scaling approaches.
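The two steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate` stands in for an LLM call and `task_score` for whatever scorer the aggregation step uses (e.g. an LLM judge or a likelihood estimate); both are hypothetical stand-ins introduced here for clarity.

```python
def generate(system_prompt: str, user_input: str) -> str:
    # Hypothetical LLM call; stubbed here so the sketch is self-contained.
    return f"[{system_prompt}] response to: {user_input}"

def task_score(response: str, target_task: str) -> int:
    # Hypothetical scorer: how well does `response` accomplish the
    # intended target task? Here, a crude keyword-overlap proxy.
    return sum(1 for word in target_task.split() if word in response)

def secinfer(user_input: str, target_task: str, system_prompts: list[str]) -> str:
    # Step 1: system-prompt-guided sampling -- explore diverse reasoning
    # paths by generating one response per system prompt.
    candidates = [generate(sp, user_input) for sp in system_prompts]
    # Step 2: target-task-guided aggregation -- keep the candidate most
    # likely to accomplish the intended (target) task.
    return max(candidates, key=lambda r: task_score(r, target_task))
```

The key design point is that extra inference-time compute (one generation per system prompt) buys robustness: an injected instruction that hijacks one reasoning path is unlikely to hijack all of them, and the aggregation step filters toward responses that serve the original task.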
Problem

Research questions and friction points this paper is trying to address.

Preventing prompt injection attacks in LLMs
Enhancing security via inference-time compute scaling
Selecting secure responses through multi-prompt sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses inference-time scaling to enhance security
Employs system-prompt-guided sampling for diverse responses
Applies target-task-guided aggregation for optimal selection