AI Summary
This work addresses the vulnerability of Computer Use Agents (CUAs) to prompt injection attacks, a problem that conventional defenses struggle with because the continuous UI observation agents rely on conflicts with the isolation that security requires. The authors propose a single-shot planning architecture in which a trusted planner, before any exposure to potentially malicious content, generates a complete execution graph with conditional branches, guaranteeing control-flow integrity against arbitrary injected instructions. By showing that dynamic UI workflows are nevertheless structurally predictable, the design enables system-level security isolation while still supporting complex interactions; it also uncovers and mitigates a novel class of Branch Steering attacks, in which manipulated UI elements trigger unintended but valid paths within the plan. Evaluated on the OSWorld benchmark, the architecture improves performance for smaller open-source models by up to 19% and retains up to 57% of the performance of frontier models, demonstrating that rigorous security and practical utility can coexist.
Abstract
AI agents are vulnerable to prompt injection attacks, where malicious content hijacks agent behavior to steal credentials or cause financial loss. The only known robust defense is architectural isolation that strictly separates trusted task planning from untrusted environment observations. However, applying this design to Computer Use Agents (CUAs) -- systems that automate tasks by viewing screens and executing actions -- presents a fundamental challenge: current agents require continuous observation of UI state to determine each action, conflicting with the isolation required for security. We resolve this tension by demonstrating that UI workflows, while dynamic, are structurally predictable. We introduce Single-Shot Planning for CUAs, where a trusted planner generates a complete execution graph with conditional branches before any observation of potentially malicious content, providing provable control flow integrity guarantees against arbitrary instruction injections. Although this architectural isolation successfully prevents instruction injections, we show that additional measures are needed to prevent Branch Steering attacks, which manipulate UI elements to trigger unintended valid paths within the plan. We evaluate our design on OSWorld, and retain up to 57% of the performance of frontier models while improving performance for smaller open-source models by up to 19%, demonstrating that rigorous security and utility can coexist in CUAs.