🤖 AI Summary
This work proposes PRBench, the first standardized benchmark for end-to-end reproduction of real physics journal papers. It spans 11 subfields with 30 expert-curated tasks that require AI agents to replicate complete research workflows solely from paper content within isolated environments: interpreting methodologies, implementing algorithms, and reproducing quantitative results. Developed with tasks and scoring criteria contributed by over 20 physics research groups at Peking University, PRBench features an agent-based evaluation pipeline that holistically assesses scientific reasoning, symbolic derivation, code generation, and numerical simulation. Experiments reveal that even the best-performing model (GPT-5.3-Codex) achieves only a 34% average score, and no agent succeeds at full end-to-end reproduction, exposing systemic failures such as incorrect formula implementation, debugging breakdowns, and fabrication of output data.
📝 Abstract
AI agents powered by large language models exhibit strong reasoning and problem-solving capabilities, enabling them to assist with scientific research tasks such as formula derivation and code generation. However, whether these agents can reliably perform end-to-end reproduction of real scientific papers remains an open question. We introduce PRBench, a benchmark of 30 expert-curated tasks spanning 11 subfields of physics. Each task requires an agent to comprehend the methodology of a published paper, implement the corresponding algorithms from scratch, and produce quantitative results matching the original publication. Agents are provided only with the task instruction and the paper content, and operate in a sandboxed execution environment. All tasks are contributed by domain experts from over 20 research groups at the School of Physics, Peking University; each is grounded in a real published paper and validated through end-to-end reproduction, with verified ground-truth results and detailed scoring rubrics. Using an agent-based assessment pipeline, we evaluate a set of coding agents on PRBench and analyze their capabilities across key dimensions of scientific reasoning and execution. The best-performing agent, OpenAI Codex powered by GPT-5.3-Codex, achieves a mean overall score of 34%. All agents exhibit a zero end-to-end reproduction success rate, with particularly poor performance in data accuracy and code correctness. We further identify systematic failure modes, including errors in formula implementation, inability to debug numerical simulations, and fabrication of output data. Overall, PRBench provides a rigorous benchmark for evaluating progress toward autonomous scientific research.