SWE-RL: Advancing LLM Reasoning via Reinforcement Learning on Open Software Evolution

📅 2025-02-25
🤖 AI Summary
This work addresses the limited reasoning capabilities of large language models (LLMs) in software engineering. We propose the first large-scale application of reinforcement learning (RL) to open-source software evolution data, including code snapshots, commits, and issue/PR records, to enable LLMs to autonomously reconstruct developer reasoning processes. A lightweight rule-based reward, e.g., the similarity between a generated solution and the ground truth, enables end-to-end RL fine-tuning solely from evolution data, avoiding the generalization degradation observed with supervised fine-tuning. We instantiate our framework on Llama-3-70B, yielding Llama3-SWE-RL-70B. On SWE-bench Verified, it achieves a 41.0% resolution rate, the highest reported among sub-100B-parameter models and comparable to GPT-4o. Moreover, it outperforms baselines across five diverse out-of-domain tasks, including function-level code generation and mathematical reasoning, demonstrating for the first time evolution-data-driven cross-domain generalization in reasoning.

📝 Abstract
The recent DeepSeek-R1 release has demonstrated the immense potential of reinforcement learning (RL) in enhancing the general reasoning capabilities of large language models (LLMs). While DeepSeek-R1 and other follow-up work primarily focus on applying RL to competitive coding and math problems, this paper introduces SWE-RL, the first approach to scale RL-based LLM reasoning for real-world software engineering. Leveraging a lightweight rule-based reward (e.g., the similarity score between ground-truth and LLM-generated solutions), SWE-RL enables LLMs to autonomously recover a developer's reasoning processes and solutions by learning from extensive open-source software evolution data -- the record of a software's entire lifecycle, including its code snapshots, code changes, and events such as issues and pull requests. Trained on top of Llama 3, our resulting reasoning model, Llama3-SWE-RL-70B, achieves a 41.0% solve rate on SWE-bench Verified -- a human-verified collection of real-world GitHub issues. To our knowledge, this is the best performance reported for medium-sized (<100B) LLMs to date, even comparable to leading proprietary LLMs like GPT-4o. Surprisingly, despite performing RL solely on software evolution data, Llama3-SWE-RL has even emerged with generalized reasoning skills. For example, it shows improved results on five out-of-domain tasks, namely, function coding, library use, code reasoning, mathematics, and general language understanding, whereas a supervised-finetuning baseline even leads to performance degradation on average. Overall, SWE-RL opens up a new direction to improve the reasoning capabilities of LLMs through reinforcement learning on massive software engineering data.
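The abstract describes the reward as a similarity score between the ground-truth and LLM-generated solutions, with no learned reward model. A minimal sketch of such a rule-based reward is below; the use of Python's `difflib.SequenceMatcher` and the -1.0 penalty for unparsable output are illustrative assumptions, not necessarily the paper's exact implementation.

```python
import difflib
from typing import Optional

def rule_based_reward(predicted_patch: Optional[str], oracle_patch: str) -> float:
    """Rule-based RL reward sketch: similarity between the model's patch
    and the ground-truth (oracle) patch from the software evolution data.

    Assumptions (illustrative, not the paper's exact choices):
    - difflib.SequenceMatcher ratio as the similarity metric
    - a fixed -1.0 penalty when the model output cannot be parsed as a patch
    """
    if predicted_patch is None:  # malformed / unparsable model output
        return -1.0
    return difflib.SequenceMatcher(None, oracle_patch, predicted_patch).ratio()

# A partially correct patch earns a fractional reward in [0, 1];
# an exact match earns 1.0.
oracle = "-    return n\n+    return n + 1\n"
partial = "-    return n\n+    return n + 2\n"
print(rule_based_reward(oracle, oracle))   # exact match
print(rule_based_reward(partial, oracle))  # partial credit
print(rule_based_reward(None, oracle))     # format penalty
```

Because the reward is computed directly from repository data (the ground-truth patch of a merged PR), no human labeling or reward model training is needed, which is what lets the method scale to massive open-source evolution data.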
Problem

Research questions and friction points this paper is trying to address.

How to enhance LLM reasoning with RL on real-world software engineering data, rather than competitive coding and math problems.
How to autonomously recover developers' reasoning processes and solutions from open-source software evolution records.
How to improve LLM performance on diverse out-of-domain tasks without the degradation seen with supervised fine-tuning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning for LLMs
Software Evolution Data Utilization
Generalized Reasoning Skills Enhancement
Authors

Yuxiang Wei
FAIR at Meta; University of Illinois Urbana-Champaign
Olivier Duchenne
Meta AI
Jade Copet
Facebook AI Research
Quentin Carbonneaux
FAIR at Meta
Lingming Zhang
University of Illinois Urbana-Champaign
Daniel Fried
Carnegie Mellon University
Gabriele Synnaeve
FAIR at Meta
Rishabh Singh
GenAI at Meta
Sida I. Wang
Facebook AI