🤖 AI Summary
Current AI scientist systems (automated end-to-end research pipelines) lack rigorous scrutiny of their internal workflows, which threatens the integrity, reliability, and credibility of their research outputs. This work systematically identifies four latent failure modes: inappropriate benchmark selection, data leakage, metric misuse, and post-hoc selection bias. Through controlled experiments on two widely adopted open-source AI scientist systems, we show that these flaws arise across the research lifecycle and are often undetectable from the final paper alone. To address this, we propose a transparency framework grounded in comprehensive execution logs and version-controlled code, which substantially improves failure detection, and we advocate that journals and conferences mandate submission of such artifacts alongside the paper. This work establishes a methodological foundation and a practical pathway for reproducibility and quality assurance in AI-driven scientific discovery.
📝 Abstract
AI scientist systems, capable of autonomously executing the full research workflow from hypothesis generation and experimentation to paper writing, hold significant potential for accelerating scientific discovery. However, the internal workflows of these systems have not been closely examined. This lack of scrutiny risks introducing flaws that could undermine the integrity, reliability, and trustworthiness of their research outputs. In this paper, we identify four potential failure modes in contemporary AI scientist systems: inappropriate benchmark selection, data leakage, metric misuse, and post-hoc selection bias. To examine these risks, we design controlled experiments that isolate each failure mode while addressing challenges unique to evaluating AI scientist systems. Our assessment of two prominent open-source AI scientist systems reveals several failures, spanning a spectrum of severity, that can be easily overlooked in practice. Finally, we demonstrate that access to trace logs and code from the full automated workflow enables far more effective detection of such failures than examining the final paper alone. We therefore recommend that journals and conferences evaluating AI-generated research mandate submission of these artifacts alongside the paper to ensure transparency, accountability, and reproducibility.