FIRE-Bench: Evaluating Agents on the Rediscovery of Scientific Insights

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing evaluation methods struggle to measure the true capabilities of autonomous agents in end-to-end scientific discovery, often relying on subjective LLM judgments or isolated metrics rather than systematic assessment of verifiable scientific insights. This work proposes FIRE-Bench, a benchmark that evaluates agents on their ability to independently reproduce high-impact machine learning findings across a complete research cycle: starting from only a high-level research question and progressing through exploration, experimental design, code implementation, and evidence-supported conclusions. The framework emphasizes authenticity, verifiability, and diagnostic utility. Experimental results show that even the most advanced LLM-driven agents achieve less than 50 F1 on rediscovery success, exposing systematic deficiencies in experimental design, execution, and evidence-based reasoning, and highlighting the substantial challenges that remain in full-cycle scientific automation.

๐Ÿ“ Abstract
Autonomous agents powered by large language models (LLMs) promise to accelerate scientific discovery end-to-end, but rigorously evaluating their capacity for verifiable discovery remains a central challenge. Existing benchmarks face a trade-off: they either rely heavily on LLM-as-judge evaluations of automatically generated research outputs or optimize convenient yet isolated performance metrics that provide only coarse proxies for scientific insight. To address this gap, we introduce FIRE-Bench (Full-cycle Insight Rediscovery Evaluation), a benchmark that evaluates agents through the rediscovery of established findings from recent, high-impact machine learning research. Agents are given only a high-level research question extracted from a published, verified study and must autonomously explore ideas, design experiments, implement code, execute their plans, and derive conclusions supported by empirical evidence. We evaluate a range of state-of-the-art agents with frontier LLM backbones such as gpt-5 on FIRE-Bench. Our results show that full-cycle scientific research remains challenging for current agent systems: even the strongest agents achieve limited rediscovery success (<50 F1), exhibit high variance across runs, and display recurring failure modes in experimental design, execution, and evidence-based reasoning. FIRE-Bench provides a rigorous and diagnostic framework for measuring progress toward reliable agent-driven scientific discovery.
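The rediscovery F1 reported in the abstract can be illustrated with a minimal sketch: treat the agent's derived conclusions as predicted findings and the paper's verified findings as ground truth, then compute precision, recall, and F1 from their overlap. The data shapes and the normalized-string matching criterion below are hypothetical simplifications for illustration, not FIRE-Bench's actual scoring code, which would need a far more robust way to decide whether two findings state the same insight.

```python
# Hypothetical sketch of a rediscovery F1 score: an agent's concluded
# findings are matched against the verified findings of the source paper.
# FIRE-Bench's real matching criterion may differ; we assume a simple
# normalized exact-string match here purely for illustration.

def normalize(finding: str) -> str:
    """Crude normalization so trivially different phrasings can match."""
    return " ".join(finding.lower().split())

def rediscovery_f1(predicted: list[str], ground_truth: list[str]) -> float:
    """F1 between the agent's findings and the paper's verified findings."""
    pred = {normalize(f) for f in predicted}
    gold = {normalize(f) for f in ground_truth}
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)          # findings the agent rediscovered
    precision = tp / len(pred)     # fraction of agent claims that are real
    recall = tp / len(gold)        # fraction of real findings recovered
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Under this toy metric, an agent that rediscovers one of two ground-truth findings while also asserting one unsupported claim scores F1 = 0.5, which gives a feel for what "<50 F1" means in the headline result.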
Problem

Research questions and friction points this paper is trying to address.

scientific discovery
autonomous agents
evaluation benchmark
LLM-based agents
insight rediscovery
Innovation

Methods, ideas, or system contributions that make the work stand out.

scientific discovery
autonomous agents
LLM-based evaluation
rediscovery benchmark
full-cycle research