An Analysis of LLM Fine-Tuning and Few-Shot Learning for Flaky Test Detection and Classification

📅 2025-02-04
📈 Citations: 1
Influential: 0
🤖 AI Summary
This paper addresses the challenge of detecting and classifying flaky tests—non-deterministic test cases that intermittently pass or fail—in automated testing. The authors propose FlakyXbert, a lightweight and efficient framework based on a Siamese neural network architecture for few-shot learning (FSL). The work systematically compares full fine-tuning of large language models (LLMs) against FSL in terms of accuracy–cost trade-offs. Experiments on the FlakyCat and IDoFT benchmarks demonstrate that while full LLM fine-tuning achieves high accuracy, FlakyXbert attains competitive performance using only a small number of labeled examples, substantially reducing both annotation effort and computational overhead. The authors present this as the first empirical study to delineate the practical applicability boundary of FSL versus full fine-tuning specifically for flaky test detection. The findings offer a deployable, resource-efficient solution for settings with limited labeled data and computational capacity.

📝 Abstract
Flaky tests exhibit non-deterministic behavior during execution: they may pass or fail without any changes to the program under test. Detecting and classifying these flaky tests is crucial for maintaining the robustness of automated test suites and ensuring overall reliability and confidence in testing. However, flaky test detection and classification is challenging due to the variability in test behavior, which can depend on environmental conditions and subtle code interactions. Large Language Models (LLMs) offer promising approaches to address this challenge, with fine-tuning and few-shot learning (FSL) emerging as viable techniques. With enough data, fine-tuning a pre-trained LLM can achieve high accuracy, making it suitable for organizations with more resources. Alternatively, we introduce FlakyXbert, an FSL approach that employs a Siamese network architecture to train efficiently with limited data. To understand the performance and cost differences between these two methods, we compare fine-tuning on larger datasets with FSL in scenarios restricted to smaller datasets. Our evaluation involves two existing flaky test datasets, FlakyCat and IDoFT. Our results suggest that while fine-tuning can achieve high accuracy, FSL provides a cost-effective approach with competitive accuracy, which is especially beneficial for organizations or projects with limited historical data available for training. These findings underscore the viability of both fine-tuning and FSL in flaky test detection and classification, with each suited to different organizational needs and resource availability.
Problem

Research questions and friction points this paper is trying to address.

Detecting flaky tests in software
Classifying flaky tests efficiently
Comparing fine-tuning vs few-shot learning methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM fine-tuning for flaky test detection
Few-shot learning with Siamese network
Comparative analysis of fine-tuning vs FSL
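The paper itself does not include code; as a rough illustration of the inference step a Siamese-style FSL classifier enables, the sketch below assigns a query test's embedding to the flakiness category whose few labeled support examples are most similar on average (nearest-prototype by cosine similarity). All names, labels, and vectors are hypothetical placeholders, not the authors' FlakyXbert implementation; a real system would produce the embeddings with the trained Siamese encoder.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_few_shot(query: np.ndarray, support: dict) -> str:
    """Pick the category whose support examples are, on average,
    closest to the query embedding (nearest-prototype rule)."""
    scores = {
        label: float(np.mean([cosine_similarity(query, ex) for ex in examples]))
        for label, examples in support.items()
    }
    return max(scores, key=scores.get)

# Hypothetical 4-dim embeddings standing in for Siamese-encoder output;
# category names echo common flaky-test causes, purely for illustration.
support = {
    "async-wait": [np.array([0.9, 0.1, 0.0, 0.0]),
                   np.array([0.8, 0.2, 0.1, 0.0])],
    "concurrency": [np.array([0.1, 0.9, 0.1, 0.0]),
                    np.array([0.0, 0.8, 0.2, 0.1])],
}
query = np.array([0.85, 0.15, 0.05, 0.0])
print(classify_few_shot(query, support))  # → async-wait
```

Because classification reduces to similarity against a handful of labeled examples, adding a new category only requires a few new support embeddings rather than retraining, which is the cost advantage the comparison with full fine-tuning highlights.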