Efficient Detection of Intermittent Job Failures Using Few-Shot Learning

📅 2025-07-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Intermittent job failures in CI/CD pipelines, caused by non-deterministic factors such as infrastructure issues or test flakiness, are frequently misclassified as regular code defects; the state-of-the-art rerun-based labeling heuristic mislabels many intermittent failures as regular (on average 32% in the projects studied). This paper introduces a few-shot learning framework for intermittent failure detection: using only a small number of manually labeled logs (12 per project), it fine-tunes a small language model to generate rich log embeddings and trains a lightweight classifier on them. By prioritizing annotation quality over quantity, the approach removes the need for large labeled datasets and avoids the mislabeling that arises when rerunning suspicious failures is not an explicit policy. Evaluated on 5 industrial projects and 1 open-source project, it achieves F1-scores of 70–88%, outperforming the state-of-the-art baseline, which proved ineffective (34–52% F1-score) in 4 projects.

📝 Abstract
One of the main challenges developers face in the use of continuous integration (CI) and deployment pipelines is the occurrence of intermittent job failures, which result from unexpected non-deterministic issues (e.g., flaky tests or infrastructure problems) rather than regular code-related errors such as bugs. Prior studies developed machine-learning (ML) models trained on large datasets of job logs to classify job failures as either intermittent or regular. As an alternative to costly manual labeling of large datasets, the state-of-the-art (SOTA) approach leveraged a heuristic based on non-deterministic job reruns. However, this method mislabels intermittent job failures as regular in contexts where rerunning suspicious job failures is not an explicit policy, which limits the SOTA's performance in practice. In fact, our manual analysis of 2,125 job failures from 5 industrial projects and 1 open-source project reveals that, on average, 32% of intermittent job failures are mislabeled as regular. To address these limitations, this paper introduces a novel approach to intermittent job failure detection using few-shot learning (FSL). Specifically, we fine-tune a small language model using a small number of manually labeled log examples to generate rich embeddings, which are then used to train an ML classifier. Our FSL-based approach achieves 70–88% F1-score with only 12 shots in all projects, outperforming the SOTA, which proved ineffective (34–52% F1-score) in 4 projects. Overall, this study underlines the importance of data quality over quantity and provides a more efficient and practical framework for the detection of intermittent job failures in organizations.

Problem

Research questions and friction points this paper is trying to address.

Detect intermittent job failures in CI pipelines
Reduce mislabeling of failures without rerun heuristics
Improve accuracy with few-shot learning over large datasets

Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses few-shot learning for failure detection
Fine-tunes small language model with minimal labels
Generates embeddings to train ML classifier
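
The pipeline in the bullets above — embed each failure log, then classify from only a handful of labeled examples — can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: `embed` is a toy hashing encoder standing in for the fine-tuned small language model's embeddings, the nearest-centroid rule stands in for the paper's ML classifier, and the example log lines are invented.

```python
# Minimal sketch of a few-shot log-failure classifier.
# Assumptions: embed() is a toy stand-in for language-model embeddings;
# nearest-centroid stands in for the trained ML classifier.
import hashlib
import math

def embed(log_text, dim=64):
    """Toy stand-in: hash token counts into a fixed-size unit vector."""
    vec = [0.0] * dim
    for token in log_text.lower().split():
        idx = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def centroid(vectors):
    """Mean of a list of equal-length vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def train(shots):
    """shots: list of (log_text, label) pairs, e.g. 12 labeled logs."""
    by_label = {}
    for text, label in shots:
        by_label.setdefault(label, []).append(embed(text))
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, log_text):
    """Assign the label whose centroid is closest to the log embedding."""
    v = embed(log_text)
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(v, model[label]))
    return min(model, key=dist)

# Invented example logs: a tiny "shot" set of labeled failures.
shots = [
    ("connection timed out while fetching dependencies", "intermittent"),
    ("runner lost contact with the job executor", "intermittent"),
    ("assertion failed expected 3 got 4 in test_parser", "regular"),
    ("compilation error undefined symbol foo", "regular"),
]
model = train(shots)
print(classify(model, "connection timed out during dependency fetch"))
```

In the paper's actual setup the embeddings come from a fine-tuned small language model rather than token hashing, which is what makes so few shots sufficient; the sketch only shows the shape of the train-on-embeddings-then-classify pipeline.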