CodeTaste: Can LLMs Generate Human-Level Code Refactorings?

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that code generated by large language models (LLMs) often suffers from high complexity, redundancy, and architectural debt, and that the models struggle to autonomously identify and perform human-level refactorings. To this end, we introduce CodeTaste, a benchmark that systematically evaluates LLMs' ability to detect and reproduce real-world refactorings in multi-file settings. The approach combines large-scale mining of open-source changes, dataflow analysis, and static pattern detection, and employs a two-stage "propose-then-implement" strategy. Refactoring quality is validated through test suites and behavioral equivalence checks. Experiments show that while current LLMs can refactor effectively under explicit instructions, they still exhibit a significant gap in autonomously inferring human refactoring intent. Performance improves notably with the staged strategy and with prioritizing proposals aligned with human practice.
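The behavioral equivalence checks mentioned above can be illustrated with a minimal sketch: run the original and refactored implementations on the same sample inputs and require identical outputs. This is an assumption-laden toy, not CodeTaste's actual harness; all names here (`original`, `refactored`, `behaviorally_equivalent`) are hypothetical.

```python
# Toy behavioral-equivalence check (illustrative only, not CodeTaste's harness):
# a refactoring is accepted only if the new code agrees with the old code
# on every sampled input.

def original(xs):
    # Verbose pre-refactoring version: manual accumulation loop.
    total = 0
    for x in xs:
        total = total + x
    return total

def refactored(xs):
    # Behavior-preserving refactoring: same result via the builtin.
    return sum(xs)

def behaviorally_equivalent(f, g, inputs):
    """Return True iff f and g agree on every sample input."""
    return all(f(i) == g(i) for i in inputs)

samples = [[], [1], [1, 2, 3], list(range(100))]
equivalent = behaviorally_equivalent(original, refactored, samples)
```

Real benchmarks strengthen this with repository test suites and static checks, since agreement on sampled inputs only approximates true behavior preservation.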

📝 Abstract
Large language model (LLM) coding agents can generate working code, but their solutions often accumulate complexity, duplication, and architectural debt. Human developers address such issues through refactoring: behavior-preserving program transformations that improve structure and maintainability. In this paper, we investigate whether LLM agents can (i) execute refactorings reliably and (ii) identify the refactorings that human developers actually chose in real codebases. We present CodeTaste, a benchmark of refactoring tasks mined from large-scale multi-file changes in open-source repositories. To score solutions, we combine repository test suites with custom static checks that verify removal of undesired patterns and introduction of desired patterns using dataflow reasoning. Our experimental results indicate a clear gap across frontier models: agents perform well when refactorings are specified in detail, but often fail to discover the human refactoring choices when only presented with a focus area for improvement. A propose-then-implement decomposition improves alignment, and selecting the best-aligned proposal before implementation can yield further gains. CodeTaste provides an evaluation target and a potential preference signal for aligning coding agents with human refactoring decisions in realistic codebases.
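The propose-then-implement decomposition described in the abstract can be sketched as a small two-stage loop: first elicit several candidate refactoring proposals, then implement only the one that scores best against some alignment signal. Everything below is a hypothetical illustration, not the paper's implementation; `query_model` is a stub standing in for a real LLM call, and `alignment_score` stands in for whatever preference signal (e.g. similarity to mined human refactorings) is used to rank proposals.

```python
# Hypothetical sketch of a propose-then-implement agent loop.
# `query_model` is a stub; in practice it would call an actual LLM.
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    alignment_score: float  # stand-in for a human-alignment preference signal

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"response to: {prompt}"

def propose(focus_area: str, n: int = 3) -> list[Proposal]:
    # Stage 1: generate n candidate refactoring proposals for the focus area.
    return [
        Proposal(
            description=query_model(f"Propose refactoring #{i} for {focus_area}"),
            alignment_score=1.0 / (i + 1),  # dummy scores for the sketch
        )
        for i in range(n)
    ]

def implement(best: Proposal) -> str:
    # Stage 2: implement only the selected proposal.
    return query_model(f"Implement: {best.description}")

def propose_then_implement(focus_area: str) -> str:
    proposals = propose(focus_area)
    # Select the proposal best aligned with human practice before implementing.
    best = max(proposals, key=lambda p: p.alignment_score)
    return implement(best)

patch = propose_then_implement("duplicated parsing logic in parser/")
```

The design point the sketch captures is the separation of concerns: ranking happens over cheap textual proposals before any code is written, so a better-aligned choice costs only one extra model call rather than n full implementations.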
Problem

Research questions and friction points this paper is trying to address.

code refactoring
large language models
software maintainability
behavior-preserving transformation
code quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

code refactoring
large language models
benchmark
dataflow analysis
behavior-preserving transformation