Better Privilege Separation for Agents by Restricting Data Types

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) deployed in AI agents remain vulnerable to prompt injection attacks, undermining system security and reliability. Method: this paper proposes type-directed privilege separation, which converts untrusted inputs, regardless of origin, into a curated set of restricted, execution-free data types (e.g., immutable token sequences) rather than raw strings, so that injected instructions have no channel through which to reach the model. Crucially, the approach requires no external detectors, model fine-tuning, or architectural modifications. Contribution/Results: evaluation across several real-world case studies shows that designs following these principles systematically prevent direct, indirect, and multi-step prompt injection attacks while maintaining high task utility, avoiding the security-utility trade-off inherent in prior detector- and fine-tuning-based defenses.

📝 Abstract
Large language models (LLMs) have become increasingly popular due to their ability to interact with unstructured content. As such, LLMs are now a key driver behind the automation of language processing systems, such as AI agents. Unfortunately, these advantages have come with a vulnerability to prompt injections, an attack where an adversary subverts the LLM's intended functionality with an injected task. Past approaches have proposed detectors and fine-tuning to provide robustness, but these techniques are vulnerable to adaptive attacks or cannot be used with state-of-the-art models. To this end, we propose type-directed privilege separation for LLMs, a method that systematically prevents prompt injections. We restrict the ability of an LLM to interact with third-party data by converting untrusted content to a curated set of data types; unlike raw strings, each data type is limited in scope and content, eliminating the possibility for prompt injections. We evaluate our method across several case studies and find that designs leveraging our principles can systematically prevent prompt injection attacks while maintaining high utility.
Problem

Research questions and friction points this paper is trying to address.

LLM agents' vulnerability to prompt injection attacks
Prior defenses (detectors, fine-tuning) being bypassed by adaptive attacks or incompatible with state-of-the-art models
Preventing injected tasks while maintaining utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Type-directed privilege separation for LLMs
Converting untrusted content to curated data types
Limiting data type scope to prevent injections