🤖 AI Summary
This study presents the first systematic evaluation of whether large language models (LLMs) can identify flaky tests (tests that exhibit non-deterministic outcomes across repeated executions of the same code version) from test code alone, without additional contextual information. Addressing the challenge flaky tests pose to automated quality assurance, the research pairs general-purpose and code-specific LLMs with several prompting strategies, conducting classification experiments on two benchmark datasets complemented by manual content analysis. Results show that even the best-performing model-prompt combination achieves only marginally better-than-random accuracy. Further qualitative analysis reveals that most test code inherently lacks sufficient cues for reliable flakiness detection, demonstrating a fundamental limitation of relying exclusively on test code for this task.
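To make the notion of flakiness concrete, the following toy sketch (my own illustration, not taken from the study) simulates a test whose pass/fail outcome depends on nondeterministic timing. Rerunning it on the same "code version" yields a mix of passes and failures, which is exactly the behavior that defines a flaky test:

```python
import random

def unstable_test(rng):
    """A toy 'test' whose outcome depends on simulated latency.

    Timing dependence is a classic flakiness cause: the assertion
    only holds when the simulated operation happens to be fast.
    """
    simulated_latency_ms = rng.uniform(0, 200)
    return simulated_latency_ms < 100  # passes only if "fast enough"

def rerun(test, runs=20, seed=0):
    """Execute the same test repeatedly, as flakiness detectors do."""
    rng = random.Random(seed)
    return [test(rng) for _ in range(runs)]

results = rerun(unstable_test)
is_flaky = len(set(results)) > 1  # mixed pass/fail on identical reruns
```

The seeded generator keeps the demonstration reproducible while still modeling run-to-run variation; in practice the nondeterminism comes from real clocks, threads, networks, or shared state rather than a simulated value.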
📝 Abstract
Flaky tests yield inconsistent results when they are repeatedly executed on the same code revision. They interfere with automated quality assurance of code changes and hinder efficient software testing. Previous work evaluated approaches to train machine learning models to classify flaky tests based on identifiers in the test code. However, the resulting classifiers have been shown to lack generalizability, hindering their applicability in practical environments. Recently, pre-trained Large Language Models (LLMs) have shown the capability to generalize across various tasks. Thus, they are a promising way to address the generalizability problem of these earlier classifiers. In this study, we evaluated three LLMs (two general-purpose models, one code-specific model) using three prompting techniques on two benchmark datasets from prior studies on flaky test classification. Furthermore, we manually investigated 50 samples from these datasets to determine whether classifying flaky tests based only on test code is feasible for humans. Our findings indicate that LLMs struggle to classify flaky tests given only the test code. The results of our best prompt-model combination were only marginally better than random guessing. In our manual analysis, we found that the test code does not necessarily contain sufficient information for a flakiness classification. Our findings motivate future work to evaluate LLMs for flakiness classification with additional context, for example, using retrieval-augmented generation or agentic AI.
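The study prompts LLMs with nothing but the test code. As a hedged illustration of that setup (the prompt wording and the sample test below are my own, not reproduced from the paper), a zero-shot classification prompt might be assembled like this:

```python
# Hypothetical sample test to classify; not taken from the paper's datasets.
TEST_CODE = """\
@Test
public void testDownload() {
    Response r = client.get("https://example.com/data");
    assertEquals(200, r.status());
}
"""

def build_zero_shot_prompt(test_code):
    """Assemble a zero-shot prompt from the test code alone.

    The instruction text is illustrative; the paper's actual
    prompting techniques are not reproduced here.
    """
    return (
        "You are a software testing expert. A flaky test passes and "
        "fails non-deterministically on the same code revision.\n"
        "Based only on the following test code, answer 'flaky' or "
        "'not flaky'.\n\n"
        + test_code
        + "\nAnswer:"
    )

prompt = build_zero_shot_prompt(TEST_CODE)
```

Note that the prompt carries no information about the code under test, the execution environment, or the test's run history, which is precisely the missing context the abstract suggests supplying via retrieval-augmented generation or agentic approaches.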