PISmith: Reinforcement Learning-based Red Teaming for Prompt Injection Defenses

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current defense mechanisms against prompt injection attacks on large language models are rarely evaluated under adaptive adversarial settings, which can yield a false sense of security. To address this gap, this work proposes PISmith, a black-box red-teaming framework that uses reinforcement learning to train adversarial language agents and systematically evaluate the robustness of such defenses. PISmith introduces two key innovations, adaptive entropy regularization and dynamic advantage weighting, which together mitigate the sparse-reward problem by sustaining exploration and amplifying learning from rare but successful attacks. Experiments across 13 benchmarks show that PISmith uncovers critical vulnerabilities in state-of-the-art defenses, consistently outperforms seven baseline attack methods spanning static, search-based, and reinforcement-learning approaches, and achieves strong attack performance against both open- and closed-source models (e.g., GPT-4o-mini and GPT-5-nano) in agent environments such as InjecAgent and AgentDojo.

📝 Abstract
Prompt injection poses serious security risks to real-world LLM applications, particularly autonomous agents. Although many defenses have been proposed, their robustness against adaptive attacks remains insufficiently evaluated, potentially creating a false sense of security. In this work, we propose PISmith, a reinforcement learning (RL)-based red-teaming framework that systematically assesses existing prompt-injection defenses by training an attack LLM to optimize injected prompts in a practical black-box setting, where the attacker can only query the defended LLM and observe its outputs. We find that directly applying standard GRPO to attack strong defenses leads to sub-optimal performance due to extreme reward sparsity -- most generated injected prompts are blocked by the defense, causing the policy's entropy to collapse before discovering effective attack strategies, while the rare successes cannot be learned effectively. In response, we introduce adaptive entropy regularization and dynamic advantage weighting to sustain exploration and amplify learning from scarce successes. Extensive evaluation on 13 benchmarks demonstrates that state-of-the-art prompt injection defenses remain vulnerable to adaptive attacks. We also compare PISmith with 7 baselines across static, search-based, and RL-based attack categories, showing that PISmith consistently achieves the highest attack success rates. Furthermore, PISmith achieves strong performance in agentic settings on InjecAgent and AgentDojo against both open-source and closed-source LLMs (e.g., GPT-4o-mini and GPT-5-nano). Our code is available at https://github.com/albert-y1n/PISmith.
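The abstract's two fixes for reward sparsity under GRPO can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the linear entropy-gap schedule, and the fixed success up-weighting factor are all assumptions chosen to show the idea (keep exploration alive when entropy drops, and amplify the gradient signal from rare successful injections).

```python
import numpy as np

def adaptive_entropy_coef(entropy, target_entropy, base_coef=0.01, gain=5.0):
    # Assumed schedule: grow the entropy-bonus coefficient linearly with
    # the gap below a target entropy, counteracting premature collapse
    # when most injected prompts are blocked by the defense.
    gap = max(0.0, target_entropy - entropy)
    return base_coef * (1.0 + gain * gap)

def dynamic_advantage_weights(rewards, success_weight=4.0):
    # GRPO-style group-normalized advantages over a batch of rollouts;
    # successful (positive-reward) samples are up-weighted so the rare
    # successes are not drowned out by the many blocked attempts.
    # The constant success_weight is a placeholder for a dynamic rule.
    rewards = np.asarray(rewards, dtype=float)
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    weights = np.where(rewards > 0, success_weight, 1.0)
    return weights * adv
```

For example, with one success in four rollouts (`rewards = [1, 0, 0, 0]`), the lone success keeps a strongly positive weighted advantage while the failures receive small negative ones, biasing the policy update toward the scarce successful strategy.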
Problem

Research questions and friction points this paper is trying to address.

prompt injection
LLM security
adaptive attacks
defense evaluation
red teaming
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Prompt Injection
Red Teaming
Adaptive Entropy Regularization
Dynamic Advantage Weighting