🤖 AI Summary
Existing approaches to high-fidelity synthesis of codebases from scholarly documents (e.g., research papers) are constrained by the fundamental tension between LLM context limitations and information overload. This work formally models the task as an information channel optimization problem and introduces a four-stage information flow management framework: (1) blueprint distillation to compress core logic; (2) stateful code memory for structured indexing; (3) retrieval-augmented generation to inject domain-specific knowledge; and (4) closed-loop error correction for iterative refinement. The method significantly increases task-relevant signal density within bounded context windows, enabling end-to-end generation of high-quality, executable codebases. Evaluated on the PaperBench benchmark, it outperforms commercial coding agents—including Cursor and Claude Code—and achieves doctoral-expert-level performance on key reproducibility metrics, marking the first demonstration of fully automated, human-expert-quality scientific code reproduction.
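The four stages can be pictured as a pipeline that repeatedly trades raw volume for task-relevant signal. The sketch below is purely illustrative: every class and function name is hypothetical (not from the DeepCode release), and each stage is a stand-in for what would really be an LLM call or a learned retriever.

```python
def blueprint_distillation(paper_text: str, budget: int) -> str:
    """Stage 1 (illustrative): compress the paper's core logic into a blueprint.
    Here we naively keep lines mentioning algorithmic keywords, then truncate
    to the context budget; the real system would use an LLM summarizer."""
    keywords = ("algorithm", "loss", "update", "input", "output")
    lines = [ln for ln in paper_text.splitlines()
             if any(k in ln.lower() for k in keywords)]
    return "\n".join(lines)[:budget]

class CodeMemory:
    """Stage 2 (illustrative): stateful structured index over generated files,
    so later generation steps see a compact summary instead of full sources."""
    def __init__(self):
        self.files = {}  # maps file path -> source code

    def write(self, path: str, code: str) -> None:
        self.files[path] = code

    def summary(self) -> str:
        return "\n".join(f"{p}: {len(c)} chars" for p, c in self.files.items())

def retrieve_knowledge(query: str, knowledge_base: dict) -> str:
    """Stage 3 (illustrative): conditional knowledge injection via retrieval.
    A real system would use embedding search; this matches keys in the query."""
    return "\n".join(v for k, v in knowledge_base.items() if k in query.lower())

def repair_loop(code: str, run, max_iters: int = 3) -> str:
    """Stage 4 (illustrative): closed-loop error correction driven by
    execution feedback. `run` returns (ok, error_message)."""
    for _ in range(max_iters):
        ok, err = run(code)
        if ok:
            break
        # Stand-in for an LLM repair call conditioned on the error message.
        code = code + f"\n# TODO: fix reported error: {err}"
    return code
```

The point of the sketch is the information-flow shape, not the internals: each stage bounds what enters the next stage's context window (a distilled blueprint, a file index, retrieved snippets, a single error trace), which is how the framework keeps signal density high under a finite budget.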
📝 Abstract
Recent advances in large language models (LLMs) have given rise to powerful coding agents, making it possible for code assistants to evolve into code engineers. However, existing methods still face significant challenges in achieving high-fidelity document-to-codebase synthesis (such as scientific papers to code), primarily due to a fundamental conflict between information overload and the context bottlenecks of LLMs. In this work, we introduce DeepCode, a fully autonomous framework that addresses this challenge through principled information-flow management. By treating repository synthesis as a channel optimization problem, DeepCode orchestrates four information operations to maximize task-relevant signal under a finite context budget: source compression via blueprint distillation, structured indexing using stateful code memory, conditional knowledge injection via retrieval-augmented generation, and closed-loop error correction. Extensive evaluations on the PaperBench benchmark demonstrate that DeepCode achieves state-of-the-art performance, decisively outperforming leading commercial agents such as Cursor and Claude Code, and, crucially, surpassing PhD-level human experts from top institutions on key reproduction metrics. By systematically transforming paper specifications into production-grade implementations comparable to human expert quality, this work establishes new foundations for autonomous scientific reproduction that can accelerate research evaluation and discovery.