AutoP2C: An LLM-Based Agent Framework for Code Repository Generation from Multimodal Content in Academic Papers

📅 2025-04-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the time-consuming, expert-dependent conversion of academic papers, which contain multimodal content (text, mathematical formulas, figures, and tables), into executable code repositories, this paper introduces the novel "Paper-to-Code" (P2C) task. The authors propose a four-stage multi-agent framework: blueprint extraction, cross-modal parsing, hierarchical task decomposition, and feedback-driven debugging. The framework integrates OCR, vision-language understanding, structured code generation, and dynamic unit testing, all orchestrated end-to-end via large language models. Evaluated on a benchmark of eight real-world machine learning papers, the method achieves 100% success in generating fully functional, runnable code repositories, significantly outperforming OpenAI-o1 and DeepSeek-R1, each of which succeeds on only one of the eight papers. This work alleviates two critical bottlenecks in AI research: low reproducibility and inefficient code implementation.

📝 Abstract
Machine Learning (ML) research is spread through academic papers featuring rich multimodal content, including text, diagrams, and tabular results. However, translating these multimodal elements into executable code remains a challenging and time-consuming process that requires substantial ML expertise. We introduce "Paper-to-Code" (P2C), a novel task that transforms the multimodal content of scientific publications into fully executable code repositories, which extends beyond the existing formulation of code generation that merely converts textual descriptions into isolated code snippets. To automate the P2C process, we propose AutoP2C, a multi-agent framework based on large language models that processes both textual and visual content from research papers to generate complete code repositories. Specifically, AutoP2C contains four stages: (1) repository blueprint extraction from established codebases, (2) multimodal content parsing that integrates information from text, equations, and figures, (3) hierarchical task decomposition for structured code generation, and (4) iterative feedback-driven debugging to ensure functionality and performance. Evaluation on a benchmark of eight research papers demonstrates the effectiveness of AutoP2C, which successfully generates executable code repositories for all eight papers, while OpenAI-o1 and DeepSeek-R1 each produce runnable code for only one paper. The code is available at https://github.com/shoushouyu/Automated-Paper-to-Code.
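The four-stage pipeline described in the abstract can be sketched as a simple orchestration skeleton. This is a minimal illustrative sketch, not the authors' implementation: all function names, data shapes, and the placeholder test loop are assumptions; the real agent framework lives in the linked repository.

```python
# Hypothetical sketch of AutoP2C's four-stage agent pipeline.
# Stage outputs are plain dicts/lists standing in for LLM agent results.

def extract_blueprint(example_repos):
    # Stage 1: derive a template repository structure from established codebases.
    return {"files": ["model.py", "train.py", "eval.py"], "entry": "train.py"}

def parse_multimodal(paper):
    # Stage 2: merge information extracted from text, equations, and figures.
    return {"method": paper["text"], "figures": paper["figures"]}

def decompose_tasks(blueprint, parsed):
    # Stage 3: break generation into one structured task per planned file.
    return [{"file": f, "spec": parsed["method"]} for f in blueprint["files"]]

def generate_and_debug(tasks, max_iters=3):
    # Stage 4: generate code, run unit tests, and repair until they pass.
    repo = {t["file"]: f"# implements: {t['spec']}" for t in tasks}
    for _ in range(max_iters):
        failures = []  # placeholder for unit-test feedback from a sandbox run
        if not failures:
            break
    return repo

def auto_p2c(paper, example_repos):
    blueprint = extract_blueprint(example_repos)
    parsed = parse_multimodal(paper)
    tasks = decompose_tasks(blueprint, parsed)
    return generate_and_debug(tasks)
```

The key design point the sketch mirrors is that each stage's output is the next stage's input, so a repository-level plan (stage 1) constrains every later per-file generation step.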
Problem

Research questions and friction points this paper is trying to address.

Translating multimodal academic content into executable code
Automating code repository generation from research papers
Overcoming manual code conversion challenges in ML research
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based multi-agent framework for code generation
Multimodal content parsing from text and figures
Iterative feedback-driven debugging for functionality
Zijie Lin
University of Science and Technology of China, Hefei, China

Yiqing Shen
Johns Hopkins University

Qilin Cai
University of Science and Technology of China, Hefei, China

He Sun
University of Science and Technology of China, Hefei, China

Jinrui Zhou
University of Science and Technology of China, Hefei, China

Mingjun Xiao
University of Science and Technology of China
Research interests: Mobile Computing, Crowdsensing, Mobile Social Network, Vehicular Network