🤖 AI Summary
This work addresses the challenge in software engineering of selecting the optimal implementation from multiple candidate solutions, a task where existing large language models often fall short due to their limited ability to holistically evaluate and synthesize proposals, thereby introducing implementation risk. The authors propose SWE-Manager, the first approach to frame proposal selection as a reasoning task. Leveraging an 8B-parameter model trained via reinforcement learning, SWE-Manager simulates the decision-making of technical managers by evaluating, selecting, and synthesizing the best "golden" proposal from multiple repair candidates—without executing any code. By integrating contextual understanding, comparative reasoning across proposals, and natural language synthesis, the method transcends the limitations of conventional code generation and repair techniques. On the SWE-Lancer Manager benchmark, SWE-Manager achieves a selection accuracy of 53.21% and an earn rate of 57.75%, corresponding to a cumulative gain of $152,750, significantly outperforming strong baselines including GPT-5.
📝 Abstract
Large language model (LLM) research in software engineering has largely focused on tasks such as code generation and bug repair. In practice, teams often draft multiple candidate proposals for fixing an issue and then deliberate on one golden proposal for implementation. This selection requires not only assessing the issue's scope, impact, and urgency, but also a clear understanding of each proposal's strengths and weaknesses. A good selection can make issue resolution more reliable while reducing regression and operational risk, whereas a poor choice can increase risk and even cause unpredictable failures. We first conduct a manual study of real-world issues to characterize the rationales maintainers use when selecting among competing proposals. Motivated by these findings, we introduce SWE-Manager, a joint selection-and-synthesis approach that selects the best proposal and synthesizes a golden proposal. SWE-Manager is an 8B model trained via reinforcement learning (RL) to compare proposals, justify its choice, and synthesize a golden proposal for implementation. We view proposal selection as a reasoning task, mirroring how technical managers review competing proposals by weighing issue context and each proposal's solution without executing code or running tests. On the SWE-Lancer Manager benchmark, SWE-Manager achieves 53.21% selection accuracy and a 57.75% earn rate, earning $152,750 and outperforming strong baselines including GPT-5. To further evaluate the effectiveness of SWE-Manager in real-world issue resolution, we design the P2A framework, which simulates a real-world workflow where multiple proposals are drafted, reviewed, and a golden proposal is selected for implementation ...