Automated Test Transfer Across Android Apps Using Large Language Models

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address the challenges of low reusability, poor cross-app migration success rates, and heavy reliance on manual assertion writing in Android inter-app UI testing, this paper introduces large language models (LLMs) to UI test migration for the first time, proposing an end-to-end automated migration method. Our approach integrates UI hierarchy analysis, element semantic understanding, prompt-engineering-driven script rewriting, and dynamic assertion generation to achieve functional semantic alignment and robust cross-application migration. Evaluated on a real-world application corpus, our method achieves a 97.5% migration success rate—outperforming the best baseline by 9.1 percentage points—while reducing manual effort by 91.1% and improving efficiency by 38.2%. The core contribution lies in overcoming the limitations of conventional layout- or widget-matching approaches under significant UI divergence and weak domain generalization, enabling semantic-level generalization and dynamic oracle generation.
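The summary names four stages (UI hierarchy analysis, element semantic understanding, prompt-driven script rewriting, assertion generation). The paper itself does not publish code here, but the core step can be sketched as follows: serialize the target app's candidate widgets into a prompt and ask an LLM which one semantically matches a source-test step. This is a minimal illustrative sketch, not the authors' implementation; all names (`Widget`, `build_transfer_prompt`, `match_step`) are hypothetical, and `stub_llm` is a toy token-overlap stand-in for a real model.

```python
import re
from dataclasses import dataclass

@dataclass
class Widget:
    """A target-app UI element extracted from the view hierarchy."""
    resource_id: str
    text: str

def build_transfer_prompt(source_step: str, candidates: list[Widget]) -> str:
    """Serialize candidate widgets into a prompt asking which one
    semantically matches the source test step."""
    catalog = "\n".join(
        f"- id={w.resource_id!r} text={w.text!r}" for w in candidates
    )
    return (
        "Source test step:\n"
        f"{source_step}\n\n"
        "Candidate widgets in the target app:\n"
        f"{catalog}\n\n"
        "Reply with the resource_id of the matching widget."
    )

def match_step(source_step: str, candidates: list[Widget], llm) -> Widget:
    """Map one source-test step onto a target-app widget via the LLM."""
    answer = llm(build_transfer_prompt(source_step, candidates)).strip()
    return next(w for w in candidates if w.resource_id == answer)

def stub_llm(prompt: str) -> str:
    """Toy stand-in for a real model: pick the candidate whose visible
    text shares the most word tokens with the source step."""
    lines = prompt.splitlines()
    step_tokens = set(re.findall(r"[a-z]+", lines[1].lower()))
    best_id, best_score = "", -1
    for line in lines:
        if line.startswith("- id="):
            rid = line.split("id='")[1].split("'")[0]
            text = line.split("text='")[1].split("'")[0]
            score = len(step_tokens & set(re.findall(r"[a-z]+", text.lower())))
            if score > best_score:
                best_id, best_score = rid, score
    return best_id

widgets = [Widget("btn_signin", "Sign in"), Widget("btn_cart", "Add to cart")]
print(match_step("Tap the 'Sign in' button", widgets, stub_llm).resource_id)
# → btn_signin
```

In the real pipeline, a model call would replace `stub_llm`, and the matched widgets would feed the script-rewriting and assertion-generation stages described above.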

📝 Abstract
The pervasiveness of mobile apps in everyday life necessitates robust testing strategies to ensure quality and efficiency, especially through end-to-end usage-based tests for mobile apps' user interfaces (UIs). However, manually creating and maintaining such tests can be costly for developers. Since many apps share similar functionalities beneath diverse UIs, previous works have shown the possibility of transferring UI tests across different apps within the same domain, thereby eliminating the need for writing the tests manually. However, these methods have struggled to accommodate real-world variations, often facing limitations in scenarios where source and target apps are not very similar or fail to accurately transfer test oracles. This paper introduces an innovative technique, LLMigrate, which leverages Large Language Models (LLMs) to efficiently transfer usage-based UI tests across mobile apps. Our experimental evaluation shows LLMigrate can achieve a 97.5% success rate in automated test transfer, reducing the manual effort required to write tests from scratch by 91.1%. This represents an improvement of 9.1% in success rate and 38.2% in effort reduction compared to the best-performing prior technique, setting a new benchmark for automated test transfer.
Problem

Research questions and friction points this paper is trying to address.

- Automating UI test transfer across Android apps
- Reducing manual effort in test creation
- Handling real-world app variations effectively

Innovation

Methods, ideas, or system contributions that make the work stand out.

- Leverages LLMs for automated UI test transfer
- Achieves a 97.5% success rate in test transfer
- Reduces manual test-writing effort by 91.1%

Authors

- Benyamin Beyzaei (University of California, Irvine, USA)
- Saghar Talebipour (University of Southern California, USA)
- Ghazal Rafiei (University of Southern California, USA)
- N. Medvidović (University of Southern California, USA)
- Sam Malek (Professor, University of California, Irvine)
  Research interests: Software Engineering, Software Architecture, Mobile Computing, Software Security, Testing and Analysis