AI Summary
Scientific reproducibility often fails due to missing dependencies, incorrect paths, or version conflicts, even when code and data are publicly available. This study addresses the challenge by constructing a controlled testing platform that injects real-world reproducibility failures from R-based social science research into isolated Docker environments. For the first time, it systematically compares automated repair approaches based on prompt engineering against autonomous agent workflows under controlled conditions. Experimental results show that structured prompting achieves repair success rates of only 31–79%, whereas agent-based workflows substantially improve performance, reaching 69–96% success. These findings demonstrate the superior diagnostic and repair capabilities of autonomous agents in complex failure scenarios and establish a new paradigm for automated post-publication reproducibility repair.
Abstract
Reproducing computational research is often assumed to be as simple as rerunning the original code on the provided data. In practice, missing packages, fragile file paths, version conflicts, or incomplete logic frequently cause analyses to fail even when materials are shared. This study investigates whether large language models and AI agents can automate the diagnosis and repair of such failures, making computational results easier to reproduce and verify. We evaluate this using a controlled reproducibility testbed built from five fully reproducible R-based social science studies. Realistic failures were injected, ranging from simple issues to complex missing logic, and two automated repair workflows were tested in clean Docker environments. The first workflow is prompt-based, repeatedly querying language models with structured prompts of varying context; the second uses agent-based systems that inspect files, modify code, and rerun analyses autonomously. Across prompt-based runs, reproduction success ranged from 31 to 79 percent, with performance strongly influenced by prompt context and error complexity; complex cases benefited most from additional context. Agent-based workflows performed substantially better, with success rates of 69 to 96 percent across all complexity levels. These results suggest that automated workflows, especially agent-based systems, can significantly reduce manual effort and improve reproduction success across diverse error types. Unlike prior benchmarks, our testbed isolates post-publication repair under controlled failure modes, allowing direct comparison of prompt-based and agent-based approaches.