🤖 AI Summary
In Android GUI testing, large language models (LLMs) often misidentify task termination points when generating replayable test scripts from natural-language task descriptions, leading to premature or excessive action sequences. To address this, we propose VisiDroid, a multimodal multi-agent framework that jointly leverages screen images and textual inputs for state assessment—marking the first such integration in GUI test generation. VisiDroid employs vision-language collaborative decision-making, iterative action generation, and dynamic completion detection to achieve end-to-end, reliable script synthesis. Evaluated on a standard benchmark, VisiDroid achieves an action accuracy of 87.3%, a 23.5% relative improvement over the best prior baseline. The framework significantly improves the reliability, robustness, and cross-application generalizability of automated GUI test script generation.
📝 Abstract
In Android GUI testing, generating an action sequence for a task that can be replayed as a test script is common practice. Generating such action sequences and their test scripts from task goals described in natural language can eliminate the need to write test scripts manually. However, existing approaches based on large language models (LLMs) often struggle to identify the final action, and either end prematurely or continue past the final screen. In this paper, we introduce VisiDroid, a multi-modal, LLM-based, multi-agent framework that iteratively determines the next action and leverages visual images of screens to detect task completion. The multi-modal approach enhances our model in two significant ways. First, it avoids prematurely terminating a task when textual content alone gives misleading indications of task completion; visual input likewise helps the tool avoid errors when GUI changes do not affect progress toward the task, such as adjustments to font sizes or colors. Second, it ensures the tool does not progress beyond the final screen, which may lack explicit textual indicators of task completion but display a visual element indicating it, as is common in GUI apps. Our evaluation shows that VisiDroid achieves an accuracy of 87.3%, outperforming the best baseline relatively by 23.5%. We also demonstrate that combining images and text in our multi-modal framework enables the LLM to better determine when a task is completed.
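The iterative loop the abstract describes can be sketched as follows. This is a minimal illustration, not VisiDroid's actual implementation: the agent functions, screen transitions, and action names below are all hypothetical stubs standing in for the LLM action agent and the vision-language completion checker.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    text: str      # visible UI text / hierarchy dump
    image: bytes   # screenshot pixels (stubbed out here)

def propose_action(goal: str, screen: Screen) -> str:
    """Stub for the LLM action agent: picks the next GUI action."""
    return "tap_save" if "editor" in screen.text else "open_editor"

def is_task_complete(goal: str, screen: Screen) -> bool:
    """Stub for the multimodal checker. In VisiDroid this consults both
    the screenshot and the text, so a misleading text cue alone does not
    end the task early, and a visual-only completion cue is not missed."""
    return "saved" in screen.text  # image check omitted in this stub

def execute(action: str, screen: Screen) -> Screen:
    """Stub device step: applies the action, returns the new screen."""
    transitions = {
        "open_editor": Screen("editor open", b""),
        "tap_save": Screen("note saved", b""),
    }
    return transitions[action]

def generate_script(goal: str, screen: Screen, max_steps: int = 10) -> list[str]:
    """Iteratively choose actions until the checker declares completion."""
    script: list[str] = []
    for _ in range(max_steps):
        if is_task_complete(goal, screen):  # check BEFORE acting, so the
            break                           # loop never moves past the
        action = propose_action(goal, screen)  # final screen
        script.append(action)
        screen = execute(action, screen)
    return script
```

With these stubs, `generate_script("save a note", Screen("home", b""))` yields `["open_editor", "tap_save"]`: the completion check runs before each action, which is what keeps the sequence from ending prematurely or overshooting the final screen.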