🤖 AI Summary
This work proposes the first code-edit recommendation approach grounded in developers' actual editing workflows, addressing a critical limitation of current systems: despite strong benchmark performance, they often disrupt developers' cognitive flow by misaligning with the natural reasoning process of coding. To bridge this gap, the authors introduce a digital-twin-based evaluation framework that integrates development-log analysis, edit-trajectory modeling, and cross-architecture model optimization. The framework aligns recommendation systems with developers' intuitive editing behaviors, supports unified evaluation and optimization of heterogeneous models, and substantially reduces disruptive interruptions. It establishes a quantifiable benchmark and a practical paradigm for next-generation code-assistance tools that integrate seamlessly into real-world development practice.
📝 Abstract
Large language models (LLMs) for code editing have achieved remarkable progress, yet recent empirical studies reveal a fundamental disconnect between technical accuracy and developer productivity. Despite strong benchmark performance, developers complete tasks 19% slower when using AI assistance, and 68.81% of recommendations disrupt their mental flow. This misalignment stems from training on static commit snapshots that lack temporal information, which causes models to optimize for end results rather than the incremental, context-sensitive steps that match developers' natural reasoning process.
To bridge this gap, we present EditFlow, which benchmarks and optimizes subsequent code edit recommendation systems by reconstructing developer editing flows. EditFlow addresses three key challenges. First, collecting edit-order data that reflects developers' flow is inherently difficult: manual annotation imposes prohibitive overhead, while development logs capture only a single trajectory rather than all plausible editing flows. Second, benchmarking recommendation performance against a developer's ongoing editing flow requires a digital-twin-like simulation that can faithfully replicate the editing process. Third, existing recommendation systems vary drastically in scale and architecture, making it challenging to develop a unified optimization strategy that endows all models with mental-flow awareness regardless of design or capability.
......