🤖 AI Summary
This study addresses a critical security vulnerability in Computer-Using Agents (CUAs): perceptual errors in screen understanding can cause routine clicks to be redirected into privileged operations. The work presents the first formal modeling of CUA perception failures as a security threat and introduces an independent verification mechanism decoupled from the agent’s perception loop. This mechanism employs a dual-channel approach—combining visual and semantic analysis—by integrating object detection with knowledge base–driven textual reasoning to jointly validate the clicked target against the user’s intended action. Experimental evaluations demonstrate that the proposed method significantly outperforms single-channel baselines across adversarial attacks, real-world GUI screenshots, and agent interaction trajectories, effectively preventing erroneous actions caused by perceptual misjudgments.
📝 Abstract
Computer-using agents (CUAs) act directly on graphical user interfaces, yet their perception of the screen is often unreliable. Existing work largely treats these failures as performance limitations, asking whether an action succeeds, rather than whether the agent is acting on the correct object at all. We argue that this is fundamentally a security problem. We formalize the visual confused deputy: a failure mode in which an agent authorizes an action based on a misperceived screen state, due to grounding errors, adversarial screenshot manipulation, or time-of-check-to-time-of-use (TOCTOU) races. This gap is practically exploitable: even simple screen-level manipulations can redirect routine clicks into privileged actions while remaining indistinguishable from ordinary agent mistakes. To mitigate this threat, we propose the first guardrail that operates outside the agent's perceptual loop. Our method, dual-channel contrastive classification, independently evaluates (1) the visual click target and (2) the agent's reasoning about the action against deployment-specific knowledge bases, and blocks execution if either channel indicates risk. The key insight is that these two channels capture complementary failure modes: visual evidence detects target-level mismatches, while textual reasoning reveals dangerous intent behind visually innocuous controls. Across controlled attacks, real GUI screenshots, and agent traces, the combined guardrail consistently outperforms either channel alone. Our results suggest that CUA safety requires not only better action generation, but independent verification of what the agent believes it is clicking and why. Materials are provided (model, benchmark, and code: https://github.com/vllm-project/semantic-router).
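The abstract's dual-channel design can be sketched as a simple OR-gate over two independent risk checks. This is a minimal illustration, not the paper's implementation: the channel functions and the `PRIVILEGED_TERMS` knowledge base below are hypothetical stand-ins for the object detector and knowledge base–driven textual reasoning the authors describe.

```python
# Hypothetical toy knowledge base of privileged operations;
# the paper uses deployment-specific knowledge bases instead.
PRIVILEGED_TERMS = {"delete", "transfer", "grant access"}


def visual_channel_risky(detected_label: str, intended_target: str) -> bool:
    """Visual channel: flag a target-level mismatch, i.e. the control
    actually under the click is not the control the user intended."""
    return detected_label.strip().lower() != intended_target.strip().lower()


def textual_channel_risky(agent_reasoning: str) -> bool:
    """Textual channel: flag dangerous intent in the agent's stated
    reasoning, even when the clicked control looks innocuous."""
    text = agent_reasoning.lower()
    return any(term in text for term in PRIVILEGED_TERMS)


def allow_click(detected_label: str, intended_target: str,
                agent_reasoning: str) -> bool:
    """Block execution if EITHER independent channel indicates risk."""
    if visual_channel_risky(detected_label, intended_target):
        return False
    if textual_channel_risky(agent_reasoning):
        return False
    return True


# A benign, well-grounded click passes; a mismatched target or
# risky stated intent is blocked by one channel or the other.
print(allow_click("Save", "Save", "save the open document"))    # True
print(allow_click("Delete", "Save", "save the open document"))  # False
print(allow_click("OK", "OK", "confirm to transfer funds"))     # False
```

The point of the OR-gating is that each channel covers the other's blind spot: the visual check catches misgrounded clicks even when the reasoning sounds harmless, and the textual check catches dangerous intent even when the pixels match.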