Automated Test Case Repair Using Language Models

📅 2024-01-12
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Software evolution frequently breaks test cases, driving up maintenance costs and slowing development. This paper proposes TaRGet (Test Repair GEneraTor), a framework that formalizes test repair as a context-aware code translation task. Leveraging pre-trained code language models such as CodeT5 and CodeLlama, TaRGet automates repair via failure-context extraction, error-pattern-aware input construction, and two-step fine-tuning. Its key contributions are: (1) a novel formalization of test repair as language translation; (2) TaRBench, a large-scale benchmark of 45,373 real-world test repairs across 59 open-source projects; and (3) a practical guide for predicting when generated repairs are likely to be unreliable. On TaRBench, TaRGet achieves 66.1% exact-match accuracy, and the study further examines whether project-specific fine-tuning data is always necessary and how well the approach transfers to new projects.
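The translation framing can be illustrated with a minimal sketch. The separator tokens (`[TEST]`, `[ERR]`, `[CTX]`) and helper name below are hypothetical, not the paper's actual input format; the idea is that the broken test, its failure output, and the relevant changes to the system under test are serialized into one source sequence for a seq2seq model to translate into a repaired test.

```python
def build_repair_input(broken_test: str, error_message: str, sut_changes: str) -> str:
    """Assemble a context-aware source sequence for a seq2seq code model.

    Mirrors the spirit of error-pattern-aware input construction: the
    breakage context is concatenated with the broken test so the model
    can condition its "translation" on why the test failed. The token
    names here are illustrative only.
    """
    return f"[TEST] {broken_test} [ERR] {error_message} [CTX] {sut_changes}"


# Example: a test broken by a renamed method in the system under test.
source = build_repair_input(
    broken_test="assertEquals(5, calc.add(2, 3));",
    error_message="error: cannot find symbol: method add(int,int)",
    sut_changes="- public int add(int a, int b)\n+ public int sum(int a, int b)",
)
# A fine-tuned model (e.g., CodeT5) would translate `source` into the
# repaired test, e.g. assertEquals(5, calc.sum(2, 3));
```

In practice the source sequence would be tokenized and fed to a fine-tuned encoder-decoder model; the string construction above is only the input side of that pipeline.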

📝 Abstract
Ensuring the quality of software systems through testing is essential, yet maintaining test cases poses significant challenges and costs. The need for frequent updates to align with the evolving system under test often entails high complexity and cost for maintaining these test cases. Further, unrepaired broken test cases can degrade test suite quality and disrupt the software development process, wasting developers' time. To address this challenge, we present TaRGet (Test Repair GEneraTor), a novel approach leveraging pre-trained code language models for automated test case repair. TaRGet treats test repair as a language translation task, employing a two-step process to fine-tune a language model based on essential context data characterizing the test breakage. To evaluate our approach, we introduce TaRBench, a comprehensive benchmark we developed covering 45,373 broken test repairs across 59 open-source projects. Our results demonstrate TaRGet's effectiveness, achieving a 66.1% exact match accuracy. Furthermore, our study examines the effectiveness of TaRGet across different test repair scenarios. We provide a practical guide to predict situations where the generated test repairs might be less reliable. We also explore whether project-specific data is always necessary for fine-tuning and if our approach can be effective on new projects.
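The 66.1% figure is an exact-match accuracy: a generated repair counts as correct only if it is identical to the developer's actual repair. A minimal sketch of such a metric, assuming simple whitespace normalization (the paper's exact comparison rules may differ):

```python
def normalize(code: str) -> str:
    """Collapse whitespace so pure formatting differences don't count as mismatches."""
    return " ".join(code.split())


def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of generated repairs that match the ground-truth repair
    after normalization. Illustrative only; normalization is an assumption."""
    assert len(predictions) == len(references)
    if not references:
        return 0.0
    hits = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return hits / len(references)


preds = ["assertEquals(5, calc.sum(2, 3));", "assertTrue(x > 0);"]
refs = ["assertEquals(5,  calc.sum(2, 3));", "assertFalse(x > 0);"]
print(exact_match_accuracy(preds, refs))  # → 0.5
```

Exact match is a strict criterion: a semantically correct repair phrased differently from the developer's edit still counts as a miss, so reported accuracy is a lower bound on useful repairs.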
Problem

Research questions and friction points this paper is trying to address.

Software Testing
Maintenance Cost
Quality Assurance
Innovation

Methods, ideas, or system contributions that make the work stand out.

TaRGet
Machine Learning Test Repair
Translation Analogy