Defects4C: Benchmarking Large Language Model Repair Capability with C/C++ Bugs

📅 2025-10-13
🤖 AI Summary
Existing automated program repair research for C/C++ is hindered by the lack of high-quality, open-source benchmarks. Method: We introduce Defects4C, the first large-scale, executable repair benchmark for C/C++, comprising 248 defective functions and 102 vulnerable functions, all drawn from real-world open-source projects and accompanied by reproducible test cases. Constructed from 9 million defect-related commit records, the dataset underwent rigorous manual curation and automated processing, with reproducibility validated via a standardized testing framework. Contribution/Results: Leveraging Defects4C, we conduct a systematic evaluation of 24 state-of-the-art large language models on C/C++ repair tasks, revealing their capabilities and critical limitations. Defects4C enables model fine-tuning and fair, standardized comparison, thereby filling a fundamental gap in benchmarking for C/C++ program repair and advancing the field.

๐Ÿ“ Abstract
Automated Program Repair (APR) plays a critical role in enhancing the quality and reliability of software systems. While substantial progress has been made in Java-based APR, largely facilitated by benchmarks like Defects4J, there remains a significant gap in research on C/C++ program repair, despite the widespread use of C/C++ and the prevalence of associated vulnerabilities. This gap is primarily due to the lack of high-quality, open-source benchmarks tailored for C/C++. To address this issue, we introduce Defects4C, a comprehensive and executable benchmark specifically designed for C/C++ program repair. Our dataset is constructed from real-world C/C++ repositories and includes a large collection of bug-relevant commits (9M in total), 248 high-quality buggy functions, and 102 vulnerable functions, all paired with test cases for reproduction. These resources enable rigorous evaluation of repair techniques and support the retraining of learning-based approaches for enhanced performance. Using Defects4C, we conduct a comprehensive empirical study evaluating the effectiveness of 24 state-of-the-art large language models (LLMs) in repairing C/C++ faults. Our findings offer valuable insights into the strengths and limitations of current LLM-based APR techniques in this domain, highlighting both the need for more robust methods and the critical role of Defects4C in advancing future research.
Problem

Research questions and friction points this paper is trying to address.

Addressing the lack of C/C++ benchmarks for program repair
Evaluating large language models' capability to fix C/C++ bugs
Providing real-world bug data to support APR research
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defects4C provides an executable C/C++ benchmark
Includes real-world buggy and vulnerable functions
Evaluates 24 LLMs for automated program repair
Jian Wang, Singapore Management University, Singapore
Xiaofei Xie, Singapore Management University
Qiang Hu, Tianjin University, China
Shangqing Liu, Nanjing University
Jiongchi Yu, Singapore Management University
Jiaolong Kong, Singapore Management University, Singapore
Yi Li, Nanyang Technological University, Singapore