🤖 AI Summary
To address the time-consuming, expert-dependent conversion of academic papers—containing multimodal content (text, mathematical formulas, figures, and tables)—into executable code repositories, this paper introduces the novel “Paper-to-Code” (P2C) task. The authors propose a four-stage multi-agent framework: blueprint extraction, cross-modal parsing, hierarchical task decomposition, and feedback-driven debugging. The framework integrates OCR, vision-language understanding, structured code generation, and dynamic unit testing, all orchestrated end-to-end via large language models. Evaluated on a benchmark of eight real-world machine learning papers, the method achieves 100% success in generating fully functional, runnable code repositories, significantly outperforming OpenAI-o1 and DeepSeek-R1, each of which succeeds on only one of the eight papers. This work targets two critical bottlenecks in AI research: low reproducibility and inefficient code implementation.
📝 Abstract
Machine Learning (ML) research is disseminated through academic papers featuring rich multimodal content, including text, diagrams, and tabular results. However, translating these multimodal elements into executable code remains a challenging and time-consuming process that requires substantial ML expertise. We introduce "Paper-to-Code" (P2C), a novel task that transforms the multimodal content of scientific publications into fully executable code repositories, extending beyond the existing formulation of code generation, which merely converts textual descriptions into isolated code snippets. To automate the P2C process, we propose AutoP2C, a multi-agent framework based on large language models that processes both textual and visual content from research papers to generate complete code repositories. Specifically, AutoP2C comprises four stages: (1) repository blueprint extraction from established codebases, (2) multimodal content parsing that integrates information from text, equations, and figures, (3) hierarchical task decomposition for structured code generation, and (4) iterative feedback-driven debugging to ensure functionality and performance. Evaluation on a benchmark of eight research papers demonstrates the effectiveness of AutoP2C, which successfully generates executable code repositories for all eight papers, whereas OpenAI-o1 and DeepSeek-R1 each produce runnable code for only one paper. The code is available at https://github.com/shoushouyu/Automated-Paper-to-Code.
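The four-stage pipeline described in the abstract can be sketched as a simple sequential orchestration. The following is a minimal illustrative sketch, not the authors' implementation: all function names, data structures, and the stub LLM/test logic (`extract_blueprint`, `run_unit_tests`, `repair`, etc.) are hypothetical placeholders standing in for LLM-driven agents.

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    """Hypothetical container for a paper's multimodal content."""
    text: str
    figures: list = field(default_factory=list)

def extract_blueprint(reference_repos):
    # Stage 1: derive a repository skeleton from established codebases.
    # A fixed skeleton stands in for LLM-based blueprint extraction.
    return {"files": ["model.py", "train.py"]}

def parse_multimodal(paper):
    # Stage 2: merge information from text, equations, and figures.
    # Real parsing would use OCR / vision-language models.
    return {"text": paper.text,
            "figure_notes": [f"parsed:{f}" for f in paper.figures]}

def decompose_tasks(blueprint, content):
    # Stage 3: break the repository into per-file generation tasks.
    return [{"file": f, "spec": content["text"]} for f in blueprint["files"]]

def run_unit_tests(code):
    # Placeholder: dynamically generated tests would execute the code.
    return True

def repair(code):
    # Placeholder: an LLM would rewrite code based on test failures.
    return code

def generate_and_debug(tasks, max_iters=3):
    # Stage 4: generate each file, then iterate test -> repair until passing.
    repo = {}
    for task in tasks:
        code = f"# implementation of {task['file']} per spec: {task['spec']}\n"
        for _ in range(max_iters):
            if run_unit_tests(code):
                break
            code = repair(code)
        repo[task["file"]] = code
    return repo

def auto_p2c(paper, reference_repos):
    """End-to-end orchestration of the four stages."""
    blueprint = extract_blueprint(reference_repos)
    content = parse_multimodal(paper)
    tasks = decompose_tasks(blueprint, content)
    return generate_and_debug(tasks)
```

For example, `auto_p2c(Paper(text="method description", figures=["fig1.png"]), [])` returns a dict mapping each blueprint file to generated code, mirroring how the framework emits a complete repository rather than isolated snippets.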