🤖 AI Summary
Large language model (LLM) agents are vulnerable to prompt injection attacks, which exploit fixed prompt structures to manipulate model behavior.
Method: This paper proposes Polymorphic Prompt Assembling (PPA), a lightweight, architecture-agnostic defense mechanism that dynamically randomizes the syntactic structure of system prompts at runtime while preserving their semantics, requiring no model modification, additional training, or static rule-based filtering.
Contribution/Results: PPA is presented as the first structural-level, proactive, and dynamic defense against prompt injection, obfuscating the prompt format rather than relying on guardrails or classifier-based guard models. Extensive evaluation shows that PPA achieves over 98% defense success rate against state-of-the-art prompt injection attacks, significantly outperforming existing guardrail and guard-model approaches, while incurring less than 0.5% additional inference latency.
📝 Abstract
LLM-based agents are widely deployed for customer support, content generation, and code assistance. However, they are vulnerable to prompt injection attacks, where adversarial inputs manipulate the model's behavior. Traditional defenses such as input sanitization, guard models, and guardrails are either cumbersome or ineffective. In this paper, we propose a novel, lightweight defense mechanism called Polymorphic Prompt Assembling (PPA), which protects against prompt injection with near-zero overhead. The approach is based on the insight that a successful prompt injection must guess and break the structure of the system prompt. By dynamically varying that structure, PPA prevents attackers from predicting the prompt format, enhancing security without compromising performance. We evaluate PPA against existing attacks and compare it with other defense methods.