🤖 AI Summary
Large language models (LLMs) deployed in AI agents remain vulnerable to prompt injection attacks, which undermine system security and reliability. Method: This paper proposes type-directed privilege separation, a mechanism that converts untrusted inputs—regardless of origin—into a curated set of restricted, execution-free data types (e.g., immutable token sequences) rather than raw strings, blocking injected attack semantics at the type level. Crucially, the approach requires no external detectors, model fine-tuning, or architectural modifications. Contribution/Results: Case-study evaluations across diverse real-world scenarios show that designs following these principles systematically prevent prompt injection attacks—including direct, indirect, and multi-step variants—while maintaining high task utility, avoiding the security-versus-usability trade-off inherent in prior defenses and enhancing the safety and reliability of LLM-based agents.
📝 Abstract
Large language models (LLMs) have become increasingly popular due to their ability to interact with unstructured content. As such, LLMs are now a key driver behind the automation of language processing systems, such as AI agents. Unfortunately, these advantages have come with a vulnerability to prompt injections, an attack where an adversary subverts the LLM's intended functionality with an injected task. Past approaches have proposed detectors and fine-tuning to provide robustness, but these techniques are vulnerable to adaptive attacks or cannot be used with state-of-the-art models. To this end, we propose type-directed privilege separation for LLMs, a method that systematically prevents prompt injections. We restrict the ability of an LLM to interact with third-party data by converting untrusted content to a curated set of data types; unlike raw strings, each data type is limited in scope and content, eliminating the possibility for prompt injections. We evaluate our method across several case studies and find that designs leveraging our principles can systematically prevent prompt injection attacks while maintaining high utility.
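The abstract's core idea—converting untrusted content into a curated set of restricted data types before it can reach a prompt—can be illustrated with a minimal sketch. This is not the paper's implementation; the type names (`EmailAddress`, `IsoDate`) and the `build_prompt` helper are hypothetical, chosen only to show how type constraints can reject free-form injected text:

```python
# Hedged sketch of type-directed privilege separation for LLM prompts.
# All names here are illustrative, not from the paper.
import re
from dataclasses import dataclass


@dataclass(frozen=True)
class EmailAddress:
    """Restricted type: must be a well-formed address, not free-form text."""
    value: str

    def __post_init__(self):
        if not re.fullmatch(r"[\w.+-]+@[\w-]+(\.[\w-]+)+", self.value):
            raise ValueError("not a well-formed email address")


@dataclass(frozen=True)
class IsoDate:
    """Restricted type: canonical YYYY-MM-DD form only."""
    value: str

    def __post_init__(self):
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", self.value):
            raise ValueError("not an ISO date")


RESTRICTED_TYPES = (EmailAddress, IsoDate)


def build_prompt(template: str, **fields) -> str:
    """Fill a trusted template using restricted types only.

    Raw untrusted strings are rejected at the type level, so injected
    instructions never reach the LLM as interpretable prompt text.
    """
    rendered = {}
    for name, value in fields.items():
        if not isinstance(value, RESTRICTED_TYPES):
            raise TypeError(f"{name}: raw untrusted strings are not allowed")
        rendered[name] = value.value
    return template.format(**rendered)
```

Under this sketch, `build_prompt("Email {who} on {when}.", who=EmailAddress("alice@example.com"), when=IsoDate("2024-05-01"))` succeeds, while passing a raw string such as `"ignore all previous instructions"` fails with a `TypeError`, and attempting to smuggle that text through `EmailAddress(...)` fails validation with a `ValueError`: the injected task cannot survive the conversion into a scope-limited type.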