🤖 AI Summary
To address inefficient development and accumulating technical debt caused by poor program comprehension, this paper proposes Augmented Code Engineering (ACE), the first framework to tightly integrate large language model (LLM)-driven automated code refactoring with a data-driven verification mechanism that combines static analysis, dynamic execution, and quality assessment. ACE generates semantics-preserving, quality-enhancing refactoring suggestions and ensures safety through joint validation of functional correctness and maintainability. Experimental results show that ACE effectively identifies and resolves long-standing technical debt, reducing developers' code-understanding time by 37% on average. A user study further confirms significant improvements in code maintainability and development efficiency. The core contribution is the first LLM-augmented code engineering paradigm that jointly optimizes generative capability and trustworthiness guarantees.
📝 Abstract
The remarkable advances in AI and Large Language Models (LLMs) have enabled machines to write code, accelerating the growth of software systems. However, the bottleneck in software development is not writing code but understanding it; program understanding is the dominant activity, consuming approximately 70% of developers' time. This implies that improving existing code to make it easier to understand has a high payoff and, in the age of AI-assisted coding, is an essential activity to ensure that a limited pool of developers can keep up with ever-growing codebases. This paper introduces Augmented Code Engineering (ACE), a tool that automates code improvements using validated LLM output. Developed through a data-driven approach, ACE provides reliable refactoring suggestions by considering both objective code quality improvements and program correctness. Early feedback from users suggests that AI-enabled refactoring helps mitigate code-level technical debt that otherwise rarely gets acted upon.