🤖 AI Summary
Prior literature inadequately characterizes developers’ real-world motivations for code refactoring in open-source projects and lacks scalable, semantically grounded analysis.
Method: We introduce an LLM-driven hybrid analytical framework that performs large-scale semantic parsing of commit messages, validated via human annotation and benchmarked against traditional software metrics.
Contribution/Results: Our approach enriches 22% of refactoring motivations with rationale absent from existing taxonomies. The LLM achieves 80% agreement with human judgments on motivation identification but aligns with established categories in only 47% of cases, demonstrating high efficacy for localized readability improvements yet revealing limitations in inferring architecture-level intent. These findings provide empirical grounding for refactoring practices and inform the design of intelligent, context-aware refactoring support tools.
📝 Abstract
Context. Code refactoring improves software quality without changing external behavior. Despite these advantages, its adoption is hindered by the considerable time, resources, and continuous effort it demands. Aim. Understanding why developers refactor, and which metrics capture these motivations, may support wider and more effective use of refactoring in practice. Method. We performed a large-scale empirical study to analyze developers' refactoring activity, leveraging Large Language Models (LLMs) to identify underlying motivations from version control data and comparing our findings with motivations previously reported in the literature. Results. LLMs matched human judgment in 80% of cases but aligned with literature-based motivations in only 47% of cases. They enriched 22% of motivations with more detailed rationale, often highlighting readability, clarity, and structural improvements. Most motivations were pragmatic, focused on simplification and maintainability. While metrics related to developer experience and code readability ranked highest, their correlation with motivation categories was weak. Conclusions. We conclude that LLMs effectively capture surface-level motivations but struggle with architectural reasoning. Their value lies in providing localized explanations, which, when combined with software metrics, can form hybrid approaches. Such integration offers a promising path toward prioritizing refactoring more systematically and balancing short-term improvements with long-term architectural goals.