Don't Act Blindly: Robust GUI Automation via Action-Effect Verification and Self-Correction

πŸ“… 2026-04-07
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing vision-language-model-driven GUI automation agents lack action-effect verification, leaving them vulnerable to error accumulation and ineffective repetition in uncertain environments. To improve robustness, we propose VeriGUI, which introduces a Thinking-Verification-Action-Expectation (TVAE) reasoning framework with explicit action-effect validation and self-correction mechanisms. Training proceeds in two stages: robust supervised fine-tuning augmented with synthetic failure trajectories, followed by GRPO reinforcement learning with an asymmetric verification reward scheme. We further construct a GUI automation benchmark specifically designed to evaluate agent robustness. Experimental results show that VeriGUI maintains competitive performance on standard tasks while significantly reducing failure loops and improving recovery success rates.
πŸ“ Abstract
Autonomous GUI agents based on vision-language models (VLMs) often assume deterministic environment responses, generating actions without verifying whether previous operations succeeded. In real-world settings with network latency, rendering delays, and system interruptions, this assumption leads to undetected action failures, repetitive ineffective behaviors, and catastrophic error accumulation. Moreover, learning robust recovery strategies is challenging due to the high cost of online interaction and the lack of real-time feedback in offline datasets. We propose VeriGUI (Verification-driven GUI Agent), which explicitly models action outcomes and recovery under noisy environments. VeriGUI introduces a Thinking-Verification-Action-Expectation (TVAE) framework to detect failures and guide corrective reasoning, and a two-stage training pipeline that combines Robust SFT with synthetic failure trajectories and GRPO with asymmetric verification rewards. We further construct a Robustness Benchmark based on AndroidControl to evaluate failure recognition and correction. Experiments show that VeriGUI significantly reduces failure loops and improves recovery success while maintaining competitive standard task performance.
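The abstract's TVAE loop (act, observe, verify the effect against a stated expectation, and retry on mismatch) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `screens_match`, `observe`, and the retry budget are hypothetical stand-ins for what VeriGUI would do with a VLM comparing the new screenshot to the agent's predicted post-action state.

```python
from dataclasses import dataclass

@dataclass
class Step:
    thought: str       # agent reasoning (the "Thinking" stage)
    action: str        # GUI action to execute
    expectation: str   # predicted post-action screen state

def screens_match(observation: str, expectation: str) -> bool:
    # Hypothetical verifier: VeriGUI would instead query the VLM to
    # judge whether the new screenshot matches the expectation.
    return expectation.lower() in observation.lower()

def run_tvae(plan, observe, max_retries=2):
    """Execute TVAE steps, retrying any action whose observed effect
    does not match the stated expectation (self-correction)."""
    history = []
    for step in plan:
        for attempt in range(max_retries + 1):
            obs = observe(step.action)                  # act, then observe
            ok = screens_match(obs, step.expectation)   # verify the effect
            history.append((step.action, attempt, ok))
            if ok:
                break   # expectation met: proceed to the next step
        else:
            # No attempt succeeded within the retry budget.
            raise RuntimeError(f"unrecovered failure at {step.action!r}")
    return history
```

Under this sketch, a transient failure (e.g. a loading spinner instead of the expected screen) triggers a retry rather than silently advancing the plan, which is the failure-loop behavior the paper targets.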
Problem

Research questions and friction points this paper is trying to address.

GUI automation
action-effect verification
failure recovery
robustness
vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

action-effect verification
self-correction
TVAE framework
robust GUI automation
asymmetric verification rewards
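The asymmetric verification reward listed above is named but not specified on this page. One plausible reading is that wrongly declaring a failed action successful (a missed failure, which lets errors accumulate) is penalized more heavily than a false alarm. The sketch below encodes that asymmetry with illustrative coefficients that are not the paper's actual values.

```python
def verification_reward(predicted_success: bool, actual_success: bool,
                        r_tp=0.2, r_tn=0.5, r_fp=-1.0, r_fn=-0.3):
    """Asymmetric reward sketch for the agent's verification verdict.
    Coefficients are hypothetical: a missed failure (false positive on
    success) is punished hardest, since undetected failures cascade."""
    if predicted_success and actual_success:
        return r_tp   # correctly confirmed success
    if not predicted_success and not actual_success:
        return r_tn   # correctly detected failure
    if predicted_success and not actual_success:
        return r_fp   # missed failure: worst case
    return r_fn       # false alarm: mildly penalized
```

In a GRPO setup, a term like this would be added to the task reward so that honest failure detection is reinforced even when the episode itself does not succeed.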
Yuzhe Zhang
University of Science and Technology of China
Large Language Models · Natural Language Processing · AI Search
Xianwei Xue
Baidu Inc., Beijing, China
Xingyong Wu
Baidu Inc., Beijing, China
Mengke Chen
Baidu Inc., Beijing, China
Chen Liu
Baidu Inc., Beijing, China
Xinran He
Baidu Inc., Beijing, China
Run Shao
Baidu Inc., Beijing, China
Feiran Liu
Beijing University of Technology, Beijing, China
Huanmin Xu
Baidu Inc., Beijing, China
Qiutong Pan
Baidu Inc., Beijing, China
Haiwei Wang
Baidu Inc., Beijing, China