🤖 AI Summary
This study addresses the critical challenge of misaligned actions in computer-use agents (CUAs), which can arise from external adversarial attacks or internal reasoning errors and thereby compromise safety and task reliability. The work presents the first systematic formulation of misaligned action detection in CUAs, covering both externally induced and internally generated deviations. To tackle this, the authors propose DeAction, a framework that detects misaligned actions in real time and iteratively corrects them prior to execution. Key contributions include MisActBench, the first benchmark of real-world agent trajectories with human-annotated, action-level alignment labels, and a lightweight, general-purpose defense mechanism that integrates action alignment analysis, structured feedback, and efficient detection modeling. Experiments demonstrate that DeAction achieves an F1 score over 15% higher than baselines on MisActBench and reduces adversarial attack success rates by more than 90% in online evaluations, while preserving or even improving normal task success rates.
📝 Abstract
Computer-use agents (CUAs) have made tremendous progress in the past year, yet they still frequently produce misaligned actions that deviate from the user's original intent. Such misaligned actions may arise from external attacks (e.g., indirect prompt injection) or from internal limitations (e.g., erroneous reasoning). They not only expose CUAs to safety risks, but also degrade task efficiency and reliability. This work makes the first effort to define and study misaligned action detection in CUAs, with comprehensive coverage of both externally induced and internally arising misaligned actions. We further identify three common categories in real-world CUA deployment and construct MisActBench, a benchmark of realistic trajectories with human-annotated, action-level alignment labels. Moreover, we propose DeAction, a practical and universal guardrail that detects misaligned actions before execution and iteratively corrects them through structured feedback. DeAction outperforms all existing baselines across offline and online evaluations with moderate latency overhead: (1) On MisActBench, it outperforms baselines by over 15% absolute in F1 score; (2) In online evaluation, it reduces attack success rate by over 90% under adversarial settings while preserving or even improving task success rate in benign environments.
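The guardrail described above, detecting misaligned actions before execution and iteratively correcting them through structured feedback, can be sketched as a simple pre-execution loop. The sketch below is illustrative only: all names (`guarded_step`, `Verdict`, `propose`, `detect`) are hypothetical and not the paper's actual API.

```python
# Hypothetical sketch of a DeAction-style guardrail loop: before executing
# each proposed action, a detector judges whether the action aligns with the
# user's intent; if not, structured feedback is fed back to the agent for
# correction, up to a bounded number of rounds. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Verdict:
    aligned: bool       # does the action match the user's original intent?
    feedback: str = ""  # structured feedback used to correct the agent


def guarded_step(
    user_intent: str,
    propose: Callable[[str, Optional[str]], str],  # agent: (intent, feedback) -> action
    detect: Callable[[str, str], Verdict],         # detector: (intent, action) -> verdict
    max_rounds: int = 3,
) -> Optional[str]:
    """Return an action judged aligned, or None to block execution."""
    feedback: Optional[str] = None
    for _ in range(max_rounds):
        action = propose(user_intent, feedback)
        verdict = detect(user_intent, action)
        if verdict.aligned:
            return action            # safe to execute
        feedback = verdict.feedback  # iterate with the misalignment analysis
    return None  # repeated misalignment: refuse to execute
```

In this framing, a misaligned action (whether induced by prompt injection or by the agent's own faulty reasoning) never reaches the environment; it is either corrected within the round budget or blocked outright.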