🤖 AI Summary
This work addresses a limitation of large language models (LLMs): because their pre-training code data consists of static repositories, which capture only the final state of development, they lack the deep, long-horizon reasoning essential for real-world software engineering. To bridge this gap, the authors propose an "understanding via reconstruction" paradigm that recasts the development process behind a repository as a reconstructable multi-agent trajectory. By reverse-engineering high-quality reasoning trajectories (planning, debugging, and iterative refinement) from static code, and by combining dependency-graph-guided trajectory generation with search-based chain-of-thought optimization, the method produces data for continuous pre-training. Experiments on Llama-3-8B demonstrate significant improvements in long-context comprehension, coding proficiency, and agentic capabilities, effectively enhancing the model's capacity for deep reasoning.
📝 Abstract
While Large Language Models (LLMs) have achieved remarkable success in code generation, they often struggle with the deep, long-horizon reasoning required for complex software engineering. We attribute this limitation to the nature of standard pre-training data: static software repositories represent only the terminal state of an intricate intellectual process, abstracting away the intermediate planning, debugging, and iterative refinement. To bridge this gap, we propose a novel paradigm: understanding via reconstruction. We hypothesize that reverse-engineering the latent agentic trajectories -- the planning, reasoning, and debugging steps -- behind static repositories provides a far richer supervision signal than raw code alone. To operationalize this, we introduce a framework that synthesizes these trajectories using a multi-agent simulation. This process is grounded in the structural realities of the source repositories (e.g., dependency graphs and file hierarchies) to ensure fidelity. Furthermore, to guarantee the logical rigor of the synthetic data, we employ a search-based optimization technique that iteratively refines the Chain-of-Thought (CoT) reasoning to maximize the likelihood of the ground-truth code. Empirical results demonstrate that continuous pre-training on these reconstructed trajectories significantly enhances Llama-3-8B's performance across diverse benchmarks, including long-context understanding, coding proficiency, and agentic capabilities.
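The search-based CoT optimization described above, iteratively refining a reasoning trace so that it maximizes the likelihood of the ground-truth code, can be sketched as a simple greedy search. Everything below is a hypothetical illustration, not the authors' implementation: in the paper the score would be a model's log-likelihood of the ground-truth code conditioned on the CoT, and the proposer would be an LLM rewriting the CoT; here both are replaced by toy stand-ins.

```python
import random

def refine_cot(initial_cot, ground_truth_code, score_fn, propose_fn,
               n_iters=10, beam=4):
    """Greedy search: repeatedly propose edited CoTs and keep the one
    that maximizes score_fn(cot, ground_truth_code)."""
    best_cot = initial_cot
    best_score = score_fn(initial_cot, ground_truth_code)
    for _ in range(n_iters):
        # Sample several candidate rewrites of the current best CoT.
        candidates = [propose_fn(best_cot) for _ in range(beam)]
        for cand in candidates:
            s = score_fn(cand, ground_truth_code)
            if s > best_score:
                best_cot, best_score = cand, s
    return best_cot, best_score

# --- Toy stand-ins (hypothetical; not part of the paper) ---

def toy_score(cot, code):
    # Stand-in for log P(code | cot): count code tokens mentioned in the CoT.
    return sum(tok in cot for tok in set(code.split()))

def toy_propose(cot):
    # Stand-in for an LLM rewriting the CoT: append a random code token.
    return cot + " " + random.choice(["def", "add", "a", "b", "return"])
```

Because the loop only ever replaces the incumbent when a candidate scores strictly higher, the returned score is monotonically non-decreasing in the iteration count, which is the property that makes the refined CoTs a stronger supervision signal than the initial draft.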