Say One Thing, Do Another? Diagnosing Reasoning-Execution Gaps in VLM-Powered Mobile-Use Agents

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies a pervasive “reasoning–execution gap” in vision-language model (VLM)-driven mobile agents: their chain-of-thought (CoT) reasoning frequently diverges from the actions they actually execute, creating user mistrust and safety risks. To address the limitation of existing evaluations, which focus solely on execution accuracy, the authors propose the first framework that jointly assesses reasoning–execution consistency. Crucially, they introduce Ground-Truth Alignment (GTA), a metric that explicitly disentangles reasoning gaps (misalignment between the CoT and the ground-truth intent) from execution gaps (misalignment between the executed action and the CoT). Extensive experiments across diverse mobile interaction tasks reveal that execution gaps substantially outnumber reasoning gaps, that scaling model size mitigates but does not eliminate the gap, and that GTA robustly exposes systematic deficiencies across mainstream VLMs, providing an interpretable diagnostic tool for building trustworthy agents.

📝 Abstract
Mobile-use agents powered by vision-language models (VLMs) have shown great potential in interpreting natural language instructions and generating corresponding actions based on mobile graphical user interfaces. Recent studies suggest that incorporating chain-of-thought (CoT) reasoning tends to improve execution accuracy. However, existing evaluations emphasize execution accuracy while neglecting whether CoT reasoning aligns with ground-truth actions. This oversight fails to assess potential reasoning-execution gaps, which in turn foster over-trust: users relying on seemingly plausible CoTs may unknowingly authorize harmful actions, potentially resulting in financial loss or a trust crisis. In this work, we introduce a new evaluation framework to diagnose reasoning-execution gaps. At its core lies Ground-Truth Alignment (GTA), which measures whether the action implied by a CoT matches the ground-truth action. By combining GTA with the standard Exact Match (EM) metric, we jointly assess both reasoning accuracy and execution accuracy. This joint perspective reveals two types of reasoning-execution gaps: (i) Execution Gap (EG), where the reasoning identifies the correct action but execution fails, and (ii) Reasoning Gap (RG), where execution succeeds but the reasoning process conflicts with the actual execution. Experimental results across a wide range of mobile interaction tasks reveal that reasoning-execution gaps are prevalent, with execution gaps occurring more frequently than reasoning gaps. Moreover, while scaling up model size reduces the overall gap, sizable execution gaps persist even in the largest models. Further analysis shows that our framework reliably reflects systematic EG/RG patterns in state-of-the-art models. These findings offer concrete diagnostics and support the development of more trustworthy mobile-use agents.
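The joint EM/GTA perspective amounts to a two-bit decision table over each step. A minimal sketch of that classification logic (the function name and the two "consistent" labels are illustrative; the abstract defines only the EG and RG cases):

```python
def classify_gap(em: bool, gta: bool) -> str:
    """Classify one agent step from the two metrics described in the paper.

    em:  Exact Match -- the executed action matches the ground-truth action.
    gta: Ground-Truth Alignment -- the action implied by the CoT matches
         the ground-truth action.
    """
    if em and gta:
        return "consistent-correct"    # reasoning and execution both succeed
    if gta and not em:
        return "execution-gap"         # EG: reasoning is right, execution fails
    if em and not gta:
        return "reasoning-gap"         # RG: execution succeeds, CoT conflicts
    return "consistent-incorrect"      # both reasoning and execution fail
```

Aggregating these labels over a benchmark would yield the EG/RG frequencies the paper reports, e.g. that `execution-gap` steps outnumber `reasoning-gap` steps.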
Problem

Research questions and friction points this paper is trying to address.

Diagnosing reasoning-execution gaps in VLM-powered mobile agents
Evaluating alignment between chain-of-thought reasoning and ground-truth actions
Identifying execution gaps where reasoning succeeds but execution fails
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Ground-Truth Alignment (GTA) evaluation framework
Measures reasoning-execution gaps in VLM-powered mobile agents
Combines GTA with Exact Match to jointly assess reasoning and execution accuracy