SWE-Refactor: A Repository-Level Benchmark for Real-World LLM-Based Code Refactoring

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing code refactoring benchmarks, which suffer from insufficient scenario coverage, low change purity, and a lack of repository-level context, hindering effective evaluation of large language models (LLMs) on semantics-preserving refactoring tasks. To this end, the authors construct SWE-Refactor, a benchmark of 1,099 behavior-preserving refactoring instances (922 atomic and 177 compound) mined from 18 real-world open-source Java repositories. Each instance is validated through compilation, test execution, and automated refactoring detection, and retains full repository context. A systematic evaluation of nine prominent LLMs reveals significant performance gaps on compound refactorings (an OpenAI Codex agent achieves only a 39.4% success rate), highlighting critical challenges in applying LLMs to complex structural code optimization.

📝 Abstract
Large Language Models (LLMs) have recently attracted wide interest for tackling software engineering tasks. In contrast to code generation, refactoring demands precise, semantics-preserving edits that improve program structure, which also makes automated evaluation challenging. However, existing refactoring benchmarks commonly suffer from three shortcomings: limited coverage of refactoring scenarios, the inclusion of instances that mix refactoring with unrelated changes, and insufficient repository-level context for realistic assessment. To mitigate these issues, we introduce SWE-Refactor, a new benchmark for LLM-based code refactoring. SWE-Refactor comprises 1,099 developer-written, behavior-preserving refactorings mined from 18 Java projects, including 922 atomic and 177 compound instances. Each instance is validated via compilation, test execution, and automated refactoring detection tools to ensure correctness. We evaluate nine widely used LLMs on SWE-Refactor, covering models such as GPT-4o-mini, DeepSeek-V3, and CodeLLaMa, to provide representative reference results. Our results show that complex and compound refactorings remain the primary source of failures; notably, an OpenAI Codex agent achieves only 39.4% success on compound instances. We release SWE-Refactor and all evaluation results to facilitate future research on LLM-based code refactoring.
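To make concrete what an atomic, behavior-preserving refactoring of the kind SWE-Refactor mines looks like, here is a minimal sketch of an Extract Method edit in Java. The class and method names are hypothetical, not taken from the benchmark; the point is that the edit changes structure while leaving observable behavior identical, so the project's tests still pass.

```java
// Hypothetical example of an atomic "Extract Method" refactoring.
public class InvoicePrinter {

    // Before refactoring: the total is computed inline.
    static double totalBefore(double[] prices) {
        double total = 0.0;
        for (double p : prices) {
            total += p;
        }
        return total;
    }

    // After refactoring: the summation loop is extracted into a named
    // helper. The observable behavior is unchanged.
    static double totalAfter(double[] prices) {
        return sum(prices);
    }

    static double sum(double[] prices) {
        double total = 0.0;
        for (double p : prices) {
            total += p;
        }
        return total;
    }
}
```

A compound instance would chain several such atomic edits (for example, extract a method and then rename or move it), which, per the evaluation above, is where current LLMs fail most often.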
Problem

Research questions and friction points this paper is trying to address.

code refactoring
large language models
benchmark
repository-level context
behavior-preserving edits
Innovation

Methods, ideas, or system contributions that make the work stand out.

code refactoring
large language models
repository-level benchmark
behavior-preserving transformation
empirical evaluation