Large Language Models for Mobile GUI Text Input Generation: An Empirical Study

📅 2024-04-13
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates the capability of large language models (LLMs) to autonomously generate effective text inputs for Android GUI testing. Using 114 UI pages collected from 62 open-source Android apps, the authors propose a UI context-aware prompt engineering method that guides nine mainstream LLMs, including LLaMA and GPT variants, to produce inputs capable of triggering page transitions. In this first large-scale empirical evaluation in the domain, the best-performing models achieve page-pass-through rates of 50.58% to 66.67%, and supplying more complete UI contextual information is shown to improve input quality. A separate experiment on 37 real-world text-input bugs finds that directly generating invalid inputs with LLMs is insufficient for bug detection: all nine LLMs detect fewer than 23% of the bugs. The contributions include (1) six actionable insights for applying LLMs in mobile testing practice, and (2) a reusable, context-driven framework for generating test-relevant textual inputs.

📝 Abstract
Mobile applications have become an essential part of our daily lives, making quality assurance an important activity. Graphical User Interface (GUI) testing is a quality assurance method that has frequently been used for mobile apps. When conducting GUI testing, it is important to generate effective text inputs for the text-input components. Some GUIs require valid text inputs to move from one page to the next, which can be a challenge for achieving complete UI exploration. Recently, Large Language Models (LLMs) have demonstrated excellent text-generation capabilities. To the best of our knowledge, there has not yet been any empirical study evaluating different pre-trained LLMs' effectiveness at generating text inputs for mobile GUI testing. This paper reports on a large-scale empirical study that extensively investigates the effectiveness of nine state-of-the-art LLMs in Android text-input generation for UI pages. We collected 114 UI pages from 62 open-source Android apps and extracted contextual information from the UI pages to construct prompts for LLMs to generate text inputs. The experimental results show that some LLMs can generate more effective and higher-quality text inputs, achieving a 50.58% to 66.67% page-pass-through rate (PPTR). We also found that using more complete UI contextual information can increase the PPTRs of LLMs for generating text inputs. We conducted an experiment to evaluate the bug-detection capabilities of LLMs by directly generating invalid text inputs. We collected 37 real-world bugs related to text inputs. The results show that using LLMs to directly generate invalid text inputs for bug detection is insufficient: the bug-detection rates of the nine LLMs are all less than 23%. In addition, we describe six insights gained regarding the use of LLMs for Android testing; these insights will benefit the Android testing community.
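The abstract describes extracting contextual information from UI pages to construct prompts that guide an LLM toward valid text inputs. A minimal sketch of that idea is below; the field names (activity, hint, nearby labels) and the prompt wording are illustrative assumptions, not the paper's actual template.

```python
# Hypothetical sketch of UI context-aware prompt construction for LLM-based
# text-input generation. The context fields used here (activity name, widget
# hint, nearby labels) are assumptions chosen to mirror the kind of UI
# information the study says it extracts; they are not the paper's exact method.

def build_prompt(ui_context: dict) -> str:
    """Assemble a natural-language prompt from extracted UI context.

    The more complete the context, the more the LLM can infer the
    expected input format (e.g. an email address on a sign-up page).
    """
    parts = ["Generate a valid text input for an Android EditText widget."]
    if ui_context.get("activity"):
        parts.append(f"The current page is '{ui_context['activity']}'.")
    if ui_context.get("hint"):
        parts.append(f"The input field's hint text is '{ui_context['hint']}'.")
    if ui_context.get("nearby_labels"):
        labels = ", ".join(ui_context["nearby_labels"])
        parts.append(f"Nearby labels on the page: {labels}.")
    parts.append("Reply with only the input value, no explanation.")
    return " ".join(parts)


# Example: a sign-up page asking for an email address.
prompt = build_prompt({
    "activity": "SignUpActivity",
    "hint": "Enter your email",
    "nearby_labels": ["Email", "Password"],
})
print(prompt)
```

The prompt would then be sent to an LLM and the returned value typed into the widget; omitting any of the optional context fields degrades gracefully, which is consistent with the paper's finding that fuller context raises the page-pass-through rate.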
Problem

Research questions and friction points this paper is trying to address.

Evaluate LLMs for mobile GUI text input generation
Assess LLMs' effectiveness in Android UI testing
Investigate bug-detection capabilities of LLMs in Android apps
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for mobile GUI text input
Empirical study on nine LLMs
Contextual UI information enhances text generation
Chenhui Cui
School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
Tao Li
School of Computer Science and Engineering, Macau University of Science and Technology, Taipa, Macau 999078, China
Junjie Wang
Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
Chunyang Chen
Professor at Department of Computer Science, Technical University of Munich
Software Engineering, Deep Learning, Human Computer Interaction, LLM4SE, GUI
Dave Towey
University of Nottingham Ningbo China
Software Testing, Metamorphic Testing, Adaptive Random Testing, Technology-enhanced Learning and Instruction, Computer Literacy
Rubing Huang
Macau University of Science and Technology
AI for Software Engineering, Software Engineering for AI, Software Testing, AI Applications