🤖 AI Summary
Language model (LM) agents exhibit weak state tracking in multi-file, multi-step code refactoring tasks, struggling to trace cross-file dependencies and faithfully follow natural language instructions. Method: We introduce RefactorBench, a benchmark of 100 large, handcrafted multi-file refactoring tasks drawn from popular open-source repositories. Each task is defined by three natural language instructions of varying specificity, and tasks are mutually exclusive, so they can be composed into longer tasks on the same repository. Contribution/Results: Baseline LM agents solve only 22% of tasks with base instructions, compared to 87% for a human developer under short time constraints. Trajectory analysis surfaces distinct failure modes, most notably a failure to track past actions. Adapting a baseline agent to condition on explicit representations of state yields a 43.9% improvement in tasks solved, and the state-aware approach extends naturally to entire digital environments. These results indicate that explicit state modeling is critical for agent robustness in code-centric multi-step tasks.
📝 Abstract
Recent advances in language model (LM) agents and function calling have enabled autonomous, feedback-driven systems that solve problems across various digital domains. To better understand the unique limitations of LM agents, we introduce RefactorBench, a benchmark consisting of 100 large handcrafted multi-file refactoring tasks in popular open-source repositories. Solving tasks within RefactorBench requires thorough exploration of dependencies across multiple files and strong adherence to the relevant instructions. Each task is defined by three natural language instructions of varying specificity; tasks are mutually exclusive, allowing the creation of longer combined tasks on the same repository. Baselines on RefactorBench reveal that current LM agents struggle with simple compositional tasks, solving only 22% of tasks with base instructions, in contrast to a human developer with short time constraints solving 87%. Through trajectory analysis, we identify various unique failure modes of LM agents and further explore the failure mode of tracking past actions. By adapting a baseline agent to condition on representations of state, we achieve a 43.9% improvement in solving RefactorBench tasks. We further extend our state-aware approach to encompass entire digital environments and outline potential directions for future research. RefactorBench aims to support the study of LM agents by providing a set of real-world, multi-hop tasks within the realm of code.