🤖 AI Summary
Machine learning research is frequently hindered by the lack of reproducible code accompanying publications. To address this, we propose PaperCoder, the first multi-agent LLM framework tailored to scientific papers, which automatically transforms research papers into fully functional, well-structured, and dependency-aware code repositories through coordinated planning, analysis, and generation phases. Our contributions include: (1) a novel multi-stage agent architecture explicitly designed for paper understanding; (2) integrated support for system-architecture diagram generation, automated configuration-file synthesis, and fidelity verification against the source paper; and (3) dependency-aware modular code synthesis coupled with a hybrid evaluation that combines automated model-based assessment with manual validation by the original authors. On the PaperBench benchmark, PaperCoder significantly outperforms strong baselines; its generated repositories receive high fidelity scores from the original authors and enable end-to-end reproducible repository construction.
📝 Abstract
Despite the rapid growth of machine learning research, corresponding code implementations are often unavailable, making it slow and labor-intensive for researchers to reproduce results and build upon prior work. At the same time, recent Large Language Models (LLMs) excel at understanding scientific documents and generating high-quality code. Inspired by this, we introduce PaperCoder, a multi-agent LLM framework that transforms machine learning papers into functional code repositories. PaperCoder operates in three stages: planning, where it constructs a high-level roadmap, designs the system architecture with diagrams, identifies file dependencies, and generates configuration files; analysis, which focuses on interpreting implementation-specific details; and generation, where modular, dependency-aware code is produced. Each stage is instantiated by a set of specialized agents designed to collaborate effectively across the pipeline. We evaluate PaperCoder on generating code implementations from machine learning papers using both model-based and human evaluations, the latter conducted by the original paper authors, with author-released repositories as ground truth when available. Our results demonstrate the effectiveness of PaperCoder in creating high-quality, faithful implementations. Furthermore, it consistently performs well on the recently released PaperBench benchmark, surpassing strong baselines by substantial margins.
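The three-stage pipeline described above can be sketched as a simple driver that threads a paper through planning, analysis, and generation. This is a minimal illustration only: every class, function, and field name here is hypothetical, not PaperCoder's actual API, and a real implementation would back each stage with LLM-powered agents rather than the deterministic placeholders used below.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """Planning-stage output: a dependency-ordered file roadmap plus synthesized config."""
    files: list   # planned source files, ordered so dependencies come first
    config: dict  # automatically synthesized configuration (e.g., hyperparameters)

@dataclass
class Analysis:
    """Analysis-stage output: implementation-specific notes per planned file."""
    notes: dict   # file name -> details extracted from the paper

def plan(paper_text: str) -> Plan:
    # A real planning agent would prompt an LLM to design the architecture,
    # draw diagrams, and identify file dependencies; this is a fixed placeholder.
    return Plan(files=["config.yaml", "model.py", "train.py"],
                config={"epochs": 10})

def analyze(paper_text: str, p: Plan) -> Analysis:
    # The analysis agent interprets implementation-specific details for each file.
    return Analysis(notes={f: f"implementation details for {f}" for f in p.files})

def generate(p: Plan, a: Analysis) -> dict:
    # The generation agent emits code file-by-file in dependency order,
    # so each file can reference the files produced before it.
    repo = {}
    for f in p.files:
        repo[f] = f"# {f}\n# {a.notes[f]}\n"
    return repo

def paper_to_repo(paper_text: str) -> dict:
    """End-to-end pipeline: paper text in, repository (file name -> contents) out."""
    p = plan(paper_text)
    a = analyze(paper_text, p)
    return generate(p, a)
```

Keeping the stage outputs as explicit, typed intermediate artifacts (here, `Plan` and `Analysis`) mirrors the paper's design intent: each agent consumes the previous stage's artifact rather than re-reading raw paper text from scratch.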