Prompt Injection as Role Confusion

📅 2026-02-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Despite safety training, large language models remain vulnerable to prompt injection attacks due to their reliance on textual content rather than source provenance to infer speaker roles, allowing adversarially crafted inputs to inherit legitimate permissions. This work identifies “role confusion” as the unifying mechanism underlying such attacks and introduces predictive probes based on internal role representations to expose a theoretical gap between secure interfaces and latent-space authority allocation. By designing novel role probes and simulating forged reasoning in both user and tool outputs, the proposed method achieves average attack success rates of 60% on StrongREJECT and 61% on agent-based data exfiltration tasks—substantially outperforming near-zero baselines. Moreover, the degree of role confusion effectively predicts attack success, offering a measurable indicator of model vulnerability.

Technology Category

Application Category

📝 Abstract
Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models infer roles from how text is written, not where it comes from. We design novel role probes to capture how models internally identify"who is speaking."These reveal why prompt injection works: untrusted text that imitates a role inherits that role's authority. We test this insight by injecting spoofed reasoning into user prompts and tool outputs, achieving average success rates of 60% on StrongREJECT and 61% on agent exfiltration, across multiple open- and closed-weight models with near-zero baselines. Strikingly, the degree of internal role confusion strongly predicts attack success before generation begins. Our findings reveal a fundamental gap: security is defined at the interface but authority is assigned in latent space. More broadly, we introduce a unifying, mechanistic framework for prompt injection, demonstrating that diverse prompt-injection attacks exploit the same underlying role-confusion mechanism.
Problem

Research questions and friction points this paper is trying to address.

prompt injection
role confusion
language models
security vulnerability
authority assignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

prompt injection
role confusion
language model security
mechanistic interpretability
adversarial attacks
🔎 Similar Papers
C
Charles Ye
Independent Researcher
J
Jasmine Cui
Independent Researcher
Dylan Hadfield-Menell
Dylan Hadfield-Menell
Massachusetts Institute of Technology
Artificial Intelligence