🤖 AI Summary
Existing scientific benchmarks predominantly assess static factual knowledge and fail to capture the core scientific reasoning capabilities, such as iterative hypothesis generation, experimental reasoning, and result interpretation, that are essential for real-world research.
Method: We introduce SciDiscovery, the first scenario-driven benchmark for scientific discovery, spanning biology, chemistry, materials science, and physics. It is constructed from authentic research projects and features a modular, reproducible, discovery-oriented evaluation protocol. Its two-tiered assessment framework jointly measures question-level accuracy and project-level discovery capability, with tasks co-designed by domain experts.
Contribution/Results: Experiments reveal that state-of-the-art large language models significantly underperform human experts on SciDiscovery, exhibiting strong scenario dependency and diminishing returns from model scale. These findings indicate that current models fall far short of general scientific “superintelligence.” SciDiscovery establishes a novel paradigm and an open benchmark for rigorously evaluating AI’s scientific reasoning capabilities.
📝 Abstract
Large language models (LLMs) are increasingly applied to scientific research, yet prevailing science benchmarks probe decontextualized knowledge and overlook the iterative reasoning, hypothesis generation, and observation interpretation that drive scientific discovery. We introduce a scenario-grounded benchmark that evaluates LLMs across biology, chemistry, materials science, and physics, where domain experts define research projects of genuine interest and decompose them into modular research scenarios from which vetted questions are sampled. The framework assesses models at two levels: (i) question-level accuracy on scenario-tied items and (ii) project-level performance, where models must propose testable hypotheses, design simulations or experiments, and interpret results. Applying this two-phase scientific discovery evaluation (SDE) framework to state-of-the-art LLMs reveals a consistent performance gap relative to general science benchmarks, diminishing returns from scaling model size and reasoning effort, and systematic weaknesses shared by top-tier models from different providers. Large performance variation across research scenarios means that the best-performing model changes from one evaluated discovery project to another, suggesting that all current LLMs remain far from general scientific "superintelligence". Nevertheless, LLMs already show promise across a wide variety of scientific discovery projects, including cases where constituent scenario scores are low, highlighting the role of guided exploration and serendipity in discovery. The SDE framework offers a reproducible benchmark for discovery-relevant evaluation of LLMs and charts practical paths toward advancing their capacity for scientific discovery.