MirrorGuard: Toward Secure Computer-Use Agents via Simulation-to-Real Reasoning Correction

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of Computer Use Agents (CUAs) to malicious instructions and visual prompt injection attacks, which can trigger unsafe reasoning and harmful actions. Existing defenses often compromise utility by terminating tasks prematurely. To overcome this limitation, the authors propose MirrorGuard, a plug-and-play defense framework built on a neural-symbolic hybrid pipeline for pure-text GUI simulation, introduced here for the first time. The pipeline efficiently generates high-risk interaction trajectories without real-system execution, and the resulting guard lets agents dynamically intercept and correct unsafe reasoning chains during actual deployment. Evaluated on ByteDance's UI-TARS system, MirrorGuard reduces the unsafe operation rate from 66.5% to 13.0%, substantially outperforming the state-of-the-art GuardAgent (53.9%) while achieving a lower false rejection rate, effectively balancing safety and task completion.
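
The paper's pipeline details are not reproduced on this page, but the core idea, symbolic transition rules standing in for a real OS while a neural policy drives the agent, can be sketched in a few lines. Everything below (`TextGUIState`, `TRANSITIONS`, `step`, `rollout`, and the `agent_policy` callable) is a hypothetical illustration of the concept, not MirrorGuard's released code:

```python
# Illustrative sketch of a pure-text GUI world for harvesting risky
# trajectories. All names here are hypothetical, not MirrorGuard's API.
from dataclasses import dataclass, field

@dataclass
class TextGUIState:
    """Symbolic GUI state: a named screen plus its visible widgets."""
    screen: str
    widgets: list[str]
    risk_flags: set[str] = field(default_factory=set)

# Symbolic half: hand-written transition rules map (screen, action)
# to the next screen and any hazards the action would cause for real.
TRANSITIONS = {
    ("file_manager", "click:delete_all"): ("confirm_dialog", {"data_loss"}),
    ("confirm_dialog", "click:confirm"): ("file_manager", {"data_loss_executed"}),
    ("browser", "click:injected_banner"): ("phishing_page", {"prompt_injection"}),
}

def step(state: TextGUIState, action: str) -> TextGUIState:
    """Apply a symbolic rule; no real OS operation is ever executed."""
    nxt, hazards = TRANSITIONS.get((state.screen, action), (state.screen, set()))
    return TextGUIState(nxt, state.widgets, state.risk_flags | hazards)

def rollout(agent_policy, start: TextGUIState, max_steps: int = 10):
    """Neural half: `agent_policy` is an LLM mapping a textual screen
    description to a (reasoning, action) pair. Roll it through the text
    world and record the trajectory, including any triggered hazards."""
    traj, state = [], start
    for _ in range(max_steps):
        reasoning, action = agent_policy(state.screen, state.widgets)
        state = step(state, action)
        traj.append((reasoning, action, sorted(state.risk_flags)))
        if state.risk_flags:  # a high-risk trajectory worth keeping
            break
    return traj
```

Trajectories that trip a risk flag become training signal for the guard, at the cost of a lookup table instead of a live operating system, which is what makes large-scale data generation cheap.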

📝 Abstract
Large foundation models are integrated into Computer Use Agents (CUAs), enabling autonomous interaction with operating systems through graphical user interfaces (GUIs) to perform complex tasks. This autonomy introduces serious security risks: malicious instructions or visual prompt injections can trigger unsafe reasoning and cause harmful system-level actions. Existing defenses, such as detection-based blocking, prevent damage but often abort tasks prematurely, reducing agent utility. In this paper, we present MirrorGuard, a plug-and-play defense framework that uses simulation-based training to improve CUA security in the real world. To reduce the cost of large-scale training in operating systems, we propose a novel neural-symbolic simulation pipeline that generates realistic, high-risk GUI interaction trajectories entirely in a text-based simulated environment, capturing unsafe reasoning patterns and potential system hazards without executing real operations. In this simulated environment, MirrorGuard learns to intercept and rectify insecure reasoning chains of CUAs before they produce and execute unsafe actions. In real-world testing, extensive evaluations across diverse benchmarks and CUA architectures show that MirrorGuard significantly mitigates security risks. For instance, on the ByteDance UI-TARS system, it reduces the unsafe rate from 66.5% to 13.0% while maintaining a marginal false refusal rate (FRR). In contrast, the state-of-the-art GuardAgent only reduces it to 53.9% and suffers from a 15.4% higher FRR. Our work demonstrates that simulation-derived defenses can provide robust, real-world protection while preserving the fundamental utility of the agent. Our code and model are publicly available at https://bmz-q-q.github.io/MirrorGuard/.
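
At deployment, the key design choice in the abstract is correction rather than refusal: the guard rewrites an unsafe reasoning chain and lets the agent replan, so benign tasks still finish. Below is a minimal sketch of that loop, assuming a hypothetical guard interface (`assess`, `rectify`) and a CUA `plan` method that accepts injected reasoning; none of these names come from the released code:

```python
# Minimal sketch of deploy-time interception. `guard_model` stands in
# for a model trained on simulated trajectories; the interface below
# (assess/rectify, plan(forced_reasoning=...)) is assumed, not released.

def guarded_step(cua, guard_model, observation: str) -> str:
    """Produce one action, rectifying unsafe reasoning instead of aborting."""
    reasoning, action = cua.plan(observation)           # CUA proposes a step
    verdict = guard_model.assess(observation, reasoning, action)
    if verdict.unsafe:
        # Rewrite the chain of thought rather than refusing the task,
        # which is what keeps the false refusal rate low.
        reasoning = guard_model.rectify(observation, reasoning)
        _, action = cua.plan(observation, forced_reasoning=reasoning)
    return action                                       # only vetted actions execute
```

Blocking-based defenses return a refusal at the first suspicious step; rewriting the reasoning and replanning is what lets the agent complete the benign portion of an injected task, matching the low-FRR numbers reported above.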
Problem

Research questions and friction points this paper is trying to address.

Computer Use Agents
Security Risks
Visual Prompt Injection
Unsafe Reasoning
Autonomous GUI Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulation-to-Real Reasoning
Neural-Symbolic Simulation
Computer Use Agents
Secure Reasoning Correction
GUI Interaction Safety
👥 Authors
Wenqi Zhang, Zhejiang University (Language Models, Multimodal Learning, Embodied Agents)
Yulin Shen, Fudan University, Shanghai, China
Changyue Jiang, Fudan University, Shanghai Innovation Institute, Shanghai, China
Jiarun Dai, Assistant Professor, Fudan University (Vulnerability Detection, AI System Security)
Geng Hong, Fudan University (Security, Cybercrime, LLM Security and Safety)
Xudong Pan, Fudan University, Shanghai Innovation Institute, Shanghai, China