🤖 AI Summary
Existing end-to-end automated program repair (APR) methods suffer from low repair success rates, primarily because monolithic large language models struggle to collaboratively execute heterogeneous subtasks—including fault localization, patch generation, and validation. This paper introduces the first dedicated APR system based on cooperative small language models (SLMs), featuring a novel component-level collaboration architecture: (i) decoupled two-stage suspicious-line localization; (ii) joint generation–critique modeling; (iii) dual verification, comprising assertion-aware and assertion-agnostic test generation plus correctness adjudication; and (iv) majority-voting ensemble integration. The system employs three 14B-parameter SLMs, enabling lightweight, efficient training and inference. Evaluated on the SWE-bench-Verified benchmark, it achieves a 46% resolution rate, substantially surpassing prior specialized APR models, while using the smallest model scale and the fewest training resources among comparable approaches.
📝 Abstract
Motivated by the success of general-purpose large language models (LLMs) in software patching, recent works have started training specialized patching models. Most train a single model to handle the end-to-end patching pipeline (issue localization, patch generation, and patch validation). However, it is hard for a small model to handle all of these tasks, as the sub-tasks have different workflows and require different expertise. As a result, even with a 70-billion-parameter model, state-of-the-art (SOTA) methods reach at most a 41% resolved rate on SWE-bench-Verified. Motivated by the collaborative nature of the patching pipeline, we propose Co-PatcheR, the first collaborative patching system built from small, specialized reasoning models for the individual components. Our key technical novelties are the task designs and training recipes for each component. First, we train one model for localization and patch generation: localization pinpoints suspicious lines through a two-step procedure, and generation combines patch synthesis with critique. We then propose a hybrid patch validation that uses two models to craft issue-reproducing test cases with and without assertions and to judge patch correctness, followed by majority-vote-based patch selection. Through extensive evaluation, we show that Co-PatcheR achieves a 46% resolved rate on SWE-bench-Verified with only 3 × 14B models. This makes Co-PatcheR the best patcher among specialized models, requiring the fewest training resources and the smallest models. We conduct a comprehensive ablation study to validate our recipes, as well as our choices of training data size, model size, and test-time scaling strategy.
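The final selection step described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and the representation of validator verdicts as `(patch_id, passed)` pairs are illustrative assumptions, showing only the majority-vote idea: each candidate patch accumulates one vote per validation run it passes, and the patch with the most votes is selected.

```python
from collections import Counter


def majority_vote(verdicts):
    """Pick the candidate patch with the most passing verdicts.

    verdicts: list of (patch_id, passed) pairs, one per validation run
              (e.g., reproduction tests with/without assertions, judge calls).
    Returns the winning patch_id, or None if no patch passed any run.
    Ties are broken by first appearance, since Counter preserves
    insertion order and most_common's sort is stable.
    """
    tally = Counter(pid for pid, passed in verdicts if passed)
    if not tally:
        return None
    return tally.most_common(1)[0][0]


# Usage: patch p1 passes two runs, p2 passes one, so p1 is selected.
winner = majority_vote([
    ("p1", True), ("p2", True), ("p1", True), ("p2", False),
])
```

In practice the verdicts would come from the two validation models (assertion-aware and assertion-agnostic test execution) plus the correctness judge, but any source of per-patch pass/fail signals fits this scheme.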