🤖 AI Summary
This study addresses the instability and unpredictability of large language models (LLMs) in automated code refactoring, particularly the lack of systematic evaluation regarding readability improvements. Leveraging GPT-5.1, the authors perform five iterative rounds of refactoring on 230 Java code snippets using three distinct prompting strategies. They conduct a fine-grained analysis of changes at the implementation, syntactic, and comment levels, while rigorously verifying functional correctness and robustness. The work reveals, for the first time, a consistent “refactor-then-stabilize” convergence pattern across iterations, suggesting that LLMs internalize an implicit notion of “optimal readable code.” This convergence behavior remains robust across diverse code variants and prompting strategies, providing empirical evidence for the reliability of LLM-assisted refactoring.
📝 Abstract
Large language models (LLMs) are increasingly used for automated code refactoring tasks. Although these models can refactor code quickly, the quality of the results can be inconsistent and unpredictable. In this article, we systematically study the capabilities of LLMs for code refactoring, with a specific focus on improving code readability.
We conducted a large-scale experiment using GPT-5.1 on 230 Java snippets, each systematically varied and refactored for readability across five iterations under three different prompting strategies. We categorized the fine-grained code changes made during refactoring into implementation-level, syntactic, and comment-level transformations. Subsequently, we verified functional correctness and tested the robustness of the results with novel snippets.
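The iterative protocol described above can be sketched as a simple feedback loop: each round's output becomes the next round's input. The sketch below is hypothetical, not the authors' actual harness; `call_llm`, the prompt texts, and the strategy names are stand-ins (the stub simulates a model that stabilizes after one pass).

```python
# Hypothetical sketch of the iterative refactoring protocol.
# `call_llm` is a stub standing in for a real model call (e.g., GPT-5.1);
# it simulates "refactor-then-stabilize" by normalizing trailing whitespace
# once and then returning later inputs unchanged.

PROMPTS = {
    "generic": "Refactor this Java code to improve readability.",
    "factor-specific": "Refactor this Java code; focus on naming and comments.",
    "structured": "Refactor step by step, then output only the final code.",
}

def call_llm(prompt: str, code: str) -> str:
    # Stub: a real implementation would query the model with `prompt` + `code`.
    return "\n".join(line.rstrip() for line in code.splitlines())

def iterate_refactoring(code: str, strategy: str, rounds: int = 5) -> list:
    """Feed each round's output back as the next round's input."""
    versions = [code]
    for _ in range(rounds):
        versions.append(call_llm(PROMPTS[strategy], versions[-1]))
    return versions

versions = iterate_refactoring("int x = 1;   \nint y=2;", "generic")
```

Keeping every intermediate version (rather than only the final one) is what makes the per-iteration change analysis at the implementation, syntactic, and comment levels possible.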
Our results reveal three main insights: First, iterative code refactoring exhibits an initial phase of restructuring followed by stabilization. This convergence tendency suggests that LLMs possess an internalized understanding of an "optimally readable" version of code. Second, convergence patterns are fairly robust across different code variants. Third, explicit prompting toward specific readability factors slightly influences the refactoring dynamics.
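One way to make the "restructuring followed by stabilization" pattern concrete is to measure how much each iteration changes relative to the previous one. This is an illustrative sketch, not the paper's actual metric; it uses a toy version history and a generic string-similarity measure (`difflib.SequenceMatcher`).

```python
import difflib

def dissimilarity(a: str, b: str) -> float:
    """0.0 means identical strings, values near 1.0 mean heavy change."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

# Toy version history: heavy restructuring in round 1, then no further change.
versions = [
    "int a=1;int b=2;return a+b;",
    "int a = 1;\nint b = 2;\nreturn a + b;",
    "int a = 1;\nint b = 2;\nreturn a + b;",
    "int a = 1;\nint b = 2;\nreturn a + b;",
]

# Per-iteration change magnitude; convergence shows up as a drop to zero.
deltas = [dissimilarity(versions[i], versions[i + 1])
          for i in range(len(versions) - 1)]
converged = deltas[-1] == 0.0
```

A delta trajectory that starts high and flattens to zero is exactly the convergence signature the study reports across code variants and prompting strategies.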
These insights provide an empirical foundation for assessing the reliability of LLM-assisted code refactoring, which opens pathways for future research, including comparative analyses across models and a systematic evaluation of additional software quality dimensions in LLM-refactored code.