🤖 AI Summary
This study investigates the capability of large language models (LLMs) to autonomously generate effective textual inputs for Android GUI testing. Using 114 UI pages collected from 62 open-source Android apps, we propose a UI context-aware prompt engineering method to guide nine mainstream LLMs (including LLaMA and GPT variants) to produce inputs capable of triggering page transitions. We conduct the first large-scale empirical evaluation in this domain, finding a positive relationship between the completeness of UI context in the prompt and the page-pass-through rate (PPTR): the best-performing LLMs achieve PPTRs of 50.58%–66.67%. Contextual enhancement is empirically shown to be critical to input quality. In a separate experiment on 37 real-world text-input-related bugs, directly generating invalid text inputs for bug detection proves insufficient, with all nine LLMs detecting fewer than 23% of the bugs. Our contributions include: (1) six actionable insights for applying LLMs in mobile testing practice, and (2) a reusable, context-driven approach for generating test-relevant textual inputs.
📝 Abstract
Mobile applications have become an essential part of our daily lives, making quality assurance for them an important activity. Graphical User Interface (GUI) testing is a quality assurance method that has frequently been used for mobile apps. When conducting GUI testing, it is important to generate effective text inputs for the text-input components. Some GUIs require valid text inputs to move from one page to the next, which can be a challenge to achieving complete UI exploration. Recently, Large Language Models (LLMs) have demonstrated excellent text-generation capabilities. To the best of our knowledge, there has not yet been any empirical study evaluating the effectiveness of different pre-trained LLMs at generating text inputs for mobile GUI testing. This paper reports on a large-scale empirical study that extensively investigates the effectiveness of nine state-of-the-art LLMs in Android text-input generation for UI pages. We collected 114 UI pages from 62 open-source Android apps and extracted contextual information from the UI pages to construct prompts for LLMs to generate text inputs. The experimental results show that some LLMs can generate more effective and higher-quality text inputs than others, achieving a 50.58% to 66.67% page-pass-through rate (PPTR). We also found that using more complete UI contextual information can increase the PPTRs of LLMs for generating text inputs. We conducted an experiment to evaluate the bug-detection capabilities of LLMs by directly generating invalid text inputs, using 37 real-world bugs related to text inputs. The results show that using LLMs to directly generate invalid text inputs for bug detection is insufficient: the bug-detection rates of the nine LLMs are all less than 23%. We also describe six insights gained regarding the use of LLMs for Android testing, which will benefit the Android testing community.
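To make the idea of constructing prompts from extracted UI context concrete, here is a minimal sketch of what such a prompt builder might look like. This is an illustrative assumption, not the paper's actual implementation: the field names (`app_name`, `activity_name`, `hint_text`, `nearby_labels`) stand in for whatever contextual attributes are extracted from a UI page, and the prompt wording is hypothetical.

```python
# Hypothetical sketch of UI context-aware prompt construction for one
# text-input widget. All field names and the prompt template are
# illustrative assumptions, not the study's exact method.

def build_prompt(app_name, activity_name, widget_id,
                 hint_text=None, nearby_labels=None):
    """Assemble extracted UI context into a prompt asking an LLM for
    one text input likely to let the page transition succeed."""
    context_lines = [
        f"App: {app_name}",
        f"Current page (activity): {activity_name}",
        f"Text field id: {widget_id}",
    ]
    # Richer context (hint text, nearby labels) is what the study found
    # to raise the page-pass-through rate, so include it when available.
    if hint_text:
        context_lines.append(f"Hint text: {hint_text}")
    if nearby_labels:
        context_lines.append("Nearby labels: " + ", ".join(nearby_labels))
    context = "\n".join(context_lines)
    return (
        "You are testing an Android app.\n"
        f"{context}\n"
        "Generate one realistic text input for this field so the app "
        "can move to the next page. Reply with the input only."
    )

# Example with hypothetical UI-page context:
prompt = build_prompt(
    app_name="OpenTasks",
    activity_name="EditTaskActivity",
    widget_id="task_title",
    hint_text="Task title",
    nearby_labels=["Due date", "Priority"],
)
print(prompt)
```

The resulting prompt would then be sent to each LLM under test, and the generated string entered into the widget to check whether the page transition succeeds (counted toward the PPTR).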