Exploring the Integration of Large Language Models in Industrial Test Maintenance Processes

📅 2024-09-10
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the high manual effort and cost of industrial software test maintenance, this paper proposes and demonstrates two multi-agent architectures that leverage large language models (LLMs) to identify test cases requiring maintenance following changes to the source code. Through a case study at Ericsson AB, the study distills the triggers that indicate a need for test maintenance, the actions LLMs can take in response, and the practical considerations that must be made when deploying LLMs in an industrial setting. The demonstrated architectures predict which test cases require maintenance after a code change, establishing a methodological and empirical foundation for applying LLMs to industrial test maintenance processes.

📝 Abstract
Much of the cost and effort required during the software testing process is invested in performing test maintenance - the addition, removal, or modification of test cases to keep the test suite in sync with the system-under-test or to otherwise improve its quality. Tool support could reduce the cost - and improve the quality - of test maintenance by automating aspects of the process or by providing guidance and support to developers. In this study, we explore the capabilities and applications of large language models (LLMs) - complex machine learning models adapted to textual analysis - to support test maintenance. We conducted a case study at Ericsson AB where we explored the triggers that indicate the need for test maintenance, the actions that LLMs can take, and the considerations that must be made when deploying LLMs in an industrial setting. We also proposed and demonstrated implementations of two multi-agent architectures that can predict which test cases require maintenance following a change to the source code. Collectively, these contributions advance our theoretical and practical understanding of how LLMs can be deployed to benefit industrial test maintenance processes.
Problem

Research questions and friction points this paper is trying to address.

Automating test maintenance to reduce costs and improve quality
Exploring LLMs' capabilities for industrial test maintenance support
Proposing multi-agent architecture for predicting test maintenance needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using LLMs for automated test maintenance
Multi-agent architecture predicts test changes
LLMs analyze triggers and actions in testing
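The paper's architectures are not detailed on this page; as a purely illustrative sketch (not the authors' implementation), a two-agent pipeline for flagging tests after a code change might pair an "analyzer" agent that summarizes what a diff touches with a "predictor" agent that judges which tests are affected. Here both agents are stubbed with simple heuristics standing in for LLM calls, and all names (`analyzer_agent`, `predictor_agent`, the sample diff and tests) are hypothetical:

```python
import re

def analyzer_agent(diff: str) -> set[str]:
    """Agent 1: extract names of functions touched by a code change.
    (Heuristic stand-in for an LLM that summarizes the diff.)"""
    return set(re.findall(r"def (\w+)", diff))

def predictor_agent(changed: set[str], test_suite: dict[str, str]) -> list[str]:
    """Agent 2: flag test cases whose bodies reference a changed function.
    (Heuristic stand-in for an LLM that judges maintenance need.)"""
    return sorted(name for name, body in test_suite.items()
                  if any(fn in body for fn in changed))

# Hypothetical inputs for illustration only.
diff = "+def parse_config(path):\n+    ..."
tests = {
    "test_parse_config": "assert parse_config('a.cfg') is not None",
    "test_unrelated": "assert 1 + 1 == 2",
}
flagged = predictor_agent(analyzer_agent(diff), tests)
print(flagged)  # ['test_parse_config']
```

In the paper's setting, each stub would instead be an LLM prompted with the diff or test body, with the agents' outputs chained to produce maintenance predictions.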
Ludvig Lemner
Chalmers University of Technology and Ericsson AB, Sweden
Linnea Wahlgren
Chalmers University of Technology and Ericsson AB, Sweden
Gregory Gay
Chalmers University of Technology and University of Gothenburg
Software Testing, Search-Based Software Engineering, AI4SE, Automated Software Engineering
N. Mohammadiha
Ericsson AB and Chalmers University of Technology, Sweden
Jingxiong Liu
Ericsson AB, Sweden
Joakim Wennerberg
Ericsson AB, Sweden