IsolateGPT: An Execution Isolation Architecture for LLM-Based Agentic Systems

📅 2024-03-08
📈 Citations: 13
Influential: 3
🤖 AI Summary
To address the privacy leakage and privilege escalation risks that arise when third-party apps run without execution isolation in large language model (LLM) application ecosystems, this paper proposes IsolateGPT, an execution isolation architecture for LLM-based agentic systems. IsolateGPT establishes trust boundaries across all natural language interactions: between system components, between the LLM and apps, and between apps themselves. Evaluation against a range of attacks shows that the design mitigates security, privacy, and safety issues present in non-isolated LLM-based systems without loss of functionality; the security overhead stays below 30% for three-quarters of tested queries.
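The core idea of per-app isolation can be illustrated with a minimal sketch. All class and method names here are hypothetical, chosen for illustration only, and do not reflect the paper's actual implementation: each app runs against its own private context and data store with a least-privilege capability set, so one app cannot read another app's state or use privileges it was never granted.

```python
class AppSandbox:
    """Runs one LLM app with its own private context and data store.

    A sandbox never exposes its memory directly; other components can
    only interact with it through execute(), which returns plain text.
    """

    def __init__(self, name, allowed_capabilities):
        self.name = name
        self.allowed = set(allowed_capabilities)  # least privilege
        self._history = []   # private per-app conversation state
        self._data = {}      # private per-app data

    def execute(self, instruction):
        self._history.append(instruction)
        # Placeholder for an LLM call scoped to this app's context only.
        return f"[{self.name}] handled: {instruction}"

    def can(self, capability):
        return capability in self.allowed


# Each app is confined to its own sandbox with minimal capabilities.
email_app = AppSandbox("email", {"read_inbox"})
calendar_app = AppSandbox("calendar", {"read_events", "write_events"})

print(email_app.execute("summarize unread messages"))
print(email_app.can("write_events"))  # False: privilege not granted
```

Because the sandboxes share no mutable state, a malicious or buggy app is limited to its own context and declared capabilities.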

📝 Abstract
Large language models (LLMs) extended as systems, such as ChatGPT, have begun supporting third-party applications. These LLM apps leverage the de facto natural language-based automated execution paradigm of LLMs: that is, apps and their interactions are defined in natural language, provided access to user data, and allowed to freely interact with each other and the system. These LLM app ecosystems resemble the settings of earlier computing platforms, where there was insufficient isolation between apps and the system. Because third-party apps may not be trustworthy, and exacerbated by the imprecision of natural language interfaces, the current designs pose security and privacy risks for users. In this paper, we evaluate whether these issues can be addressed through execution isolation and what that isolation might look like in the context of LLM-based systems, where there are arbitrary natural language-based interactions between system components, between LLM and apps, and between apps. To that end, we propose IsolateGPT, a design architecture that demonstrates the feasibility of execution isolation and provides a blueprint for implementing isolation, in LLM-based systems. We evaluate IsolateGPT against a number of attacks and demonstrate that it protects against many security, privacy, and safety issues that exist in non-isolated LLM-based systems, without any loss of functionality. The performance overhead incurred by IsolateGPT to improve security is under 30% for three-quarters of tested queries.
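The mediated inter-app interaction the abstract describes can be sketched roughly as follows. The names, the `Hub` class, and the approval callback are all hypothetical, not the paper's API: a central mediator routes every cross-app request, and data crosses a trust boundary only after an explicit approval check.

```python
class Hub:
    """Routes requests between isolated apps; every cross-app data flow
    must pass an explicit approval check before it is forwarded."""

    def __init__(self, approve):
        self.apps = {}
        self.approve = approve  # callback: (src, dst, payload) -> bool

    def register(self, name, handler):
        self.apps[name] = handler

    def call(self, src, dst, payload):
        if dst not in self.apps:
            raise KeyError(f"unknown app: {dst}")
        if not self.approve(src, dst, payload):
            raise PermissionError(f"{src} -> {dst} denied")
        # Only the approved payload crosses the boundary, never app state.
        return self.apps[dst](payload)


# Demo policy: allow calendar reads, deny everything else.
hub = Hub(approve=lambda src, dst, p: dst == "calendar" and p == "list events")
hub.register("calendar", lambda p: ["standup 9am", "review 2pm"])

print(hub.call("email", "calendar", "list events"))
try:
    hub.call("email", "calendar", "delete all events")
except PermissionError as e:
    print("blocked:", e)
```

The design choice the sketch highlights is that apps never invoke each other directly; the mediator is the single place where an isolation policy (here a toy lambda, in practice user confirmation or a richer policy) can be enforced.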
Problem

Research questions and friction points this paper is trying to address.

Privacy · Security · Isolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

IsolateGPT · Execution Isolation · Large Language Model Security