🤖 AI Summary
This work addresses the lack of effective evaluation benchmarks for AI research assistants in real-world, end-to-end scientific discovery scenarios, particularly for assessing their integrated capabilities in data analysis, interpretation, and novel insight generation. To this end, we propose HeurekaBench, a semi-automated benchmark framework grounded in real scientific papers and code repositories that embeds authentic research workflows into its evaluation paradigm. By leveraging a multi-LLM collaborative pipeline to generate open-ended scientific questions, followed by human validation, HeurekaBench enables rigorous, quantifiable end-to-end assessment of AI co-research systems. We instantiate this framework in single-cell biology as sc-HeurekaBench and demonstrate that adding a critic module reduces ill-formed responses from open-source LLM agents by up to 22%, substantially narrowing the performance gap with closed-source models.
📝 Abstract
LLM-based reasoning models have enabled the development of agentic systems that act as co-scientists, assisting in multi-step scientific analysis. However, evaluating these systems is challenging, as it requires realistic, end-to-end research scenarios that integrate data analysis, interpretation, and the generation of new insights from experimental data. To address this limitation, we introduce HeurekaBench, a framework for creating benchmarks with exploratory, open-ended research questions for experimental datasets. Each question is grounded in a scientific study and its corresponding code repository, and is created using a semi-automated pipeline that leverages multiple LLMs to extract insights and generate candidate workflows, which are then verified against reported findings. We instantiate the framework in single-cell biology to obtain the sc-HeurekaBench benchmark and use it to compare state-of-the-art single-cell agents. We further showcase the benefits of our benchmark for quantitatively analyzing current design choices in agentic systems. We find that adding a critic module can reduce ill-formed responses from open-source LLM-based agents by up to 22% and close the gap with their closed-source counterparts. Overall, HeurekaBench sets a path toward rigorous, end-to-end evaluation of scientific agents, grounding benchmark construction in real scientific workflows.
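To make the semi-automated pipeline concrete, below is a minimal Python sketch of the three stages the abstract describes: extracting insights from a study, turning them into open-ended questions with candidate workflows grounded in the paper's repository, and verifying candidates against the reported findings before human validation. All names, prompts, and the `call`-style LLM interface here are illustrative assumptions, not the authors' implementation; the actual prompts, verification criteria, and human-validation protocol are defined in the paper.

```python
"""Illustrative sketch of a HeurekaBench-style question-generation pipeline.

Assumptions: function names, prompts, and the LLM callable interface are
placeholders; they do not reproduce the authors' actual pipeline.
"""
from dataclasses import dataclass
from typing import Callable, List

# Each LLM is modeled as a plain prompt -> text callable, keeping the sketch
# provider-agnostic; plug in any chat-completion client of your choice.
LLM = Callable[[str], str]


@dataclass
class CandidateQuestion:
    insight: str            # finding extracted from the study
    question: str           # open-ended research question posed to the agent
    workflow: str           # candidate analysis workflow over the dataset
    verified: bool = False  # True if the workflow matches a reported finding
    human_approved: bool = False  # set during the manual validation step


def extract_insights(paper_text: str, extractor: LLM) -> List[str]:
    """Ask one LLM to list the key findings reported in the study."""
    reply = extractor(
        "List the main experimental findings of this study, one per line:\n"
        + paper_text
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]


def generate_candidates(insights: List[str], repo_code: str,
                        generator: LLM) -> List[CandidateQuestion]:
    """Ask a second LLM to turn each insight into an open-ended question
    plus a candidate workflow grounded in the repository code."""
    candidates = []
    for insight in insights:
        question = generator(
            f"Write an open-ended research question whose answer is: {insight}"
        )
        workflow = generator(
            f"Using only analysis steps present in this repository:\n{repo_code}\n"
            f"outline a workflow that answers: {question}"
        )
        candidates.append(CandidateQuestion(insight, question, workflow))
    return candidates


def verify_against_findings(candidates: List[CandidateQuestion],
                            reported_findings: str,
                            verifier: LLM) -> List[CandidateQuestion]:
    """Keep only candidates whose workflow supports a reported finding;
    survivors still require human validation before entering the benchmark."""
    kept = []
    for cand in candidates:
        verdict = verifier(
            "Does this workflow support the finding below? Answer YES or NO.\n"
            f"Finding: {cand.insight}\nWorkflow: {cand.workflow}\n"
            f"Reported findings: {reported_findings}"
        )
        cand.verified = verdict.strip().upper().startswith("YES")
        if cand.verified:
            kept.append(cand)
    return kept
```

Modeling each stage as a separate callable mirrors the multi-LLM, collaborative nature of the pipeline: different models (or the same model with different prompts) can fill the extractor, generator, and verifier roles, and only verified candidates proceed to the human-validation step.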