ACE: A Security Architecture for LLM-Integrated App Systems

📅 2025-04-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Malicious third-party applications in LLM-integrated systems introduce novel security threats, including planning integrity violations, execution availability breakdown, and sensitive data leakage. Method: We propose Abstract-Concrete-Execute (ACE), a three-stage decoupled architecture that combines trusted abstract modeling during planning with strong isolation guarantees during execution. Our approach leverages four core techniques: static information-flow analysis, abstract execution plan generation, cross-application data and capability barriers, and structured plan verification. Contribution/Results: ACE ensures control-flow integrity and strict isolation of sensitive data. Evaluated on the INJECAGENT benchmark and newly devised adversarial attacks, it admits zero successful attacks, demonstrating significant end-to-end security improvements for LLM-integrated app systems.

📝 Abstract
LLM-integrated app systems extend the utility of Large Language Models (LLMs) with third-party apps that are invoked by a system LLM using interleaved planning and execution phases to answer user queries. These systems introduce new attack vectors where malicious apps can cause integrity violation of planning or execution, availability breakdown, or privacy compromise during execution. In this work, we identify new attacks impacting the integrity of planning, as well as the integrity and availability of execution in LLM-integrated apps, and demonstrate them against IsolateGPT, a recent solution designed to mitigate attacks from malicious apps. We propose Abstract-Concrete-Execute (ACE), a new secure architecture for LLM-integrated app systems that provides security guarantees for system planning and execution. Specifically, ACE decouples planning into two phases by first creating an abstract execution plan using only trusted information, and then mapping the abstract plan to a concrete plan using installed system apps. We verify that the plans generated by our system satisfy user-specified secure information flow constraints via static analysis on the structured plan output. During execution, ACE enforces data and capability barriers between apps, and ensures that the execution is conducted according to the trusted abstract plan. We show experimentally that our system is secure against attacks from the INJECAGENT benchmark, a standard benchmark for control flow integrity in the face of indirect prompt injection attacks, and our newly introduced attacks. Our architecture represents a significant advancement towards hardening LLM-based systems containing system facilities of varying levels of trustworthiness.
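The two-phase planning idea in the abstract can be sketched in a minimal way. This is an illustrative toy, not the paper's implementation; all names (`AbstractStep`, `ConcreteStep`, `INSTALLED_APPS`, `concretize`) are hypothetical. The key property it mirrors is that the abstract plan is fixed using only trusted information before any installed app is consulted, so untrusted app content cannot alter the plan's control flow.

```python
from dataclasses import dataclass

# An abstract step names a capability, not an app. Abstract plans are
# built from trusted information only (user query, capability taxonomy).
@dataclass(frozen=True)
class AbstractStep:
    capability: str  # e.g. "email.read"

# A concrete step binds an abstract step to a specific installed app.
@dataclass(frozen=True)
class ConcreteStep:
    capability: str
    app: str

# Hypothetical registry of installed apps and their declared capabilities.
INSTALLED_APPS = {
    "MailApp": {"email.read", "email.send"},
    "CalendarApp": {"calendar.read"},
}

def concretize(plan: list[AbstractStep]) -> list[ConcreteStep]:
    """Map each abstract step to an installed app offering that capability.

    The plan's structure is decided before apps are consulted, so the
    concretization step can only fill in bindings, not change control flow."""
    concrete = []
    for step in plan:
        app = next((name for name, caps in INSTALLED_APPS.items()
                    if step.capability in caps), None)
        if app is None:
            raise LookupError(f"no installed app provides {step.capability}")
        concrete.append(ConcreteStep(step.capability, app))
    return concrete

abstract_plan = [AbstractStep("email.read"), AbstractStep("calendar.read")]
print(concretize(abstract_plan))
```

In the real system the abstract plan is produced by the LLM from trusted inputs and the mapping must itself be verified; the sketch only shows the separation of the two phases.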
Problem

Research questions and friction points this paper is trying to address.

Addresses security risks in LLM-integrated app systems
Proposes ACE architecture for secure planning and execution
Mitigates attacks on integrity and availability in LLM apps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples planning into abstract and concrete phases
Enforces data and capability barriers between apps
Verifies secure information flow via static analysis
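The static information-flow verification in the last bullet can be illustrated with a small sketch. The plan format and policy below are hypothetical assumptions for illustration, not the paper's actual structured plan output: each step declares which earlier step's output it reads and whether its executing app is trusted, and the policy forbids sensitive outputs from flowing into untrusted apps.

```python
# Hypothetical structured plan: each step records its app's trust label,
# which earlier step's output it consumes, and whether its own output
# is sensitive (e.g. private email contents).
plan = [
    {"id": 0, "app": "MailApp",    "trusted": True,  "reads": None, "sensitive_out": True},
    {"id": 1, "app": "ThirdParty", "trusted": False, "reads": 0,    "sensitive_out": False},
]

def violates_flow_policy(plan):
    """Return ids of steps where sensitive data flows to an untrusted app.

    A purely static check: it inspects the plan's declared data edges
    before execution, so no untrusted code runs during verification."""
    sensitive_ids = {s["id"] for s in plan if s["sensitive_out"]}
    return [s["id"] for s in plan
            if not s["trusted"] and s["reads"] in sensitive_ids]

print(violates_flow_policy(plan))  # → [1]: step 1 reads sensitive mail output
```

A plan that fails this check would be rejected before execution; transitive flows (step A to B to C) would additionally require propagating sensitivity labels along the plan's data edges.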