Evaluating Large Language Models in Scientific Discovery

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing scientific benchmarks predominantly assess static factual knowledge and fail to capture the core reasoning capabilities essential for real-world research, such as iterative hypothesis generation, experimental reasoning, and result interpretation. Method: The paper introduces a scenario-driven benchmark for scientific discovery spanning biology, chemistry, materials science, and physics, evaluated through a two-phase scientific discovery evaluation (SDE) framework. The benchmark is constructed from authentic research projects and features a modular, reproducible, discovery-oriented evaluation protocol; its two-tiered assessment, with tasks co-designed by domain experts, jointly measures question-level accuracy and project-level discovery capability. Contribution/Results: Experiments reveal that state-of-the-art large language models underperform significantly relative to human experts, exhibit strong scenario dependency, and show diminishing returns with model scale. These findings indicate that current models fall far short of general scientific “superintelligence.” The SDE framework establishes an open benchmark and a new paradigm for rigorously evaluating AI’s scientific reasoning capabilities.

📝 Abstract
Large language models (LLMs) are increasingly applied to scientific research, yet prevailing science benchmarks probe decontextualized knowledge and overlook the iterative reasoning, hypothesis generation, and observation interpretation that drive scientific discovery. We introduce a scenario-grounded benchmark that evaluates LLMs across biology, chemistry, materials, and physics, where domain experts define research projects of genuine interest and decompose them into modular research scenarios from which vetted questions are sampled. The framework assesses models at two levels: (i) question-level accuracy on scenario-tied items and (ii) project-level performance, where models must propose testable hypotheses, design simulations or experiments, and interpret results. Applying this two-phase scientific discovery evaluation (SDE) framework to state-of-the-art LLMs reveals a consistent performance gap relative to general science benchmarks, diminishing returns from scaling up model size and reasoning, and systematic weaknesses shared across top-tier models from different providers. Performance varies so widely across research scenarios that the best-performing model differs from one evaluated discovery project to the next, suggesting that all current LLMs remain distant from general scientific "superintelligence". Nevertheless, LLMs already demonstrate promise in a wide variety of scientific discovery projects, including cases where constituent scenario scores are low, highlighting the role of guided exploration and serendipity in discovery. This SDE framework offers a reproducible benchmark for discovery-relevant evaluation of LLMs and charts practical paths for advancing their development toward scientific discovery.
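The abstract describes the two-level structure in prose only. As a rough illustration, here is a minimal Python sketch of how tier-1 (question-level) evaluation could be organized; the `Project`, `Scenario`, and `question_level_accuracy` names and the exact-match grading are assumptions made for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """A modular research scenario holding expert-vetted questions."""
    name: str
    questions: list[tuple[str, str]] = field(default_factory=list)  # (question, gold answer)

@dataclass
class Project:
    """An expert-defined research project decomposed into scenarios."""
    title: str
    domain: str  # e.g. "chemistry"
    scenarios: list[Scenario] = field(default_factory=list)

def question_level_accuracy(project: Project, answer_fn) -> float:
    """Tier 1: fraction of scenario-tied questions answered correctly.

    `answer_fn(question) -> str` wraps the model under evaluation;
    exact string match stands in for the benchmark's real grading.
    """
    items = [qa for scenario in project.scenarios for qa in scenario.questions]
    if not items:
        return 0.0
    correct = sum(answer_fn(q).strip() == gold.strip() for q, gold in items)
    return correct / len(items)
```

A call such as `question_level_accuracy(project, lambda q: my_model.ask(q))` (where `my_model` is whatever client wraps the LLM) would score one model on one project; the project-level (tier-2) loop is sketched separately under Innovation below.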
Problem

Research questions and friction points this paper is trying to address.

How well can LLMs perform the iterative scientific reasoning and hypothesis generation that drive real research, beyond recalling static facts?
How large is the gap between current LLM performance and the demands of genuine scientific discovery?
How can LLMs be benchmarked consistently and reproducibly across biology, chemistry, materials, and physics?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scenario-grounded benchmark for scientific discovery evaluation
Two-phase framework assessing question-level and project-level performance
Evaluates hypothesis generation, experiment design, and result interpretation (see the project-level sketch after this list)
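Complementing the question-level sketch above, the following equally hypothetical sketch shows the project-level loop these bullets describe: the model is walked through hypothesis generation, experiment or simulation design, and result interpretation, with each phase graded and appended to a running transcript. `model_fn` and `grade_fn` are assumed stand-ins for the model client and the expert or rubric-based grader, and `project` reuses the `Project` class from the earlier sketch.

```python
from enum import Enum

class Phase(Enum):
    """Discovery phases assessed at the project level (tier 2)."""
    HYPOTHESIZE = "propose a testable hypothesis"
    DESIGN = "design a simulation or experiment to test it"
    INTERPRET = "interpret the results obtained"

def project_level_scores(project, model_fn, grade_fn) -> dict[str, float]:
    """Tier 2: walk the model through each discovery phase and grade it.

    `model_fn(prompt) -> str` queries the model under evaluation;
    `grade_fn(phase, response) -> float` is a placeholder for expert
    or rubric-based grading. Later phases see the transcript so far,
    mirroring the iterative character of discovery.
    """
    transcript = f"Research project: {project.title} ({project.domain})"
    scores: dict[str, float] = {}
    for phase in Phase:
        response = model_fn(f"{transcript}\nTask: {phase.value}.")
        scores[phase.name] = grade_fn(phase, response)
        transcript += f"\n{phase.name}: {response}"
    return scores
```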
👥 Authors

Zhangde Song
Unknown affiliation

Jieyu Lu
Deep Principle, Hangzhou, China

Yuanqi Du
PhD Student, Cornell University
Probabilistic Models · Geometric Deep Learning · AI for Science · Sampling/Optimization/Search

Botao Yu
PhD student, Ohio State University
AI for Science · NLP · AI Music

Thomas M. Pruyn
Department of Chemical Engineering & Applied Chemistry, University of Toronto, Toronto, ON, Canada

Yue Huang
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA

Kehan Guo
University of Notre Dame
LLM · Machine Reasoning · Generative Models · XAI · AI for Science

Xiuzhe Luo
QuEra Computing Inc., Boston, MA, USA

Yuanhao Qu
Department of Pathology, Department of Genetics, Cancer Biology Program, Stanford University School of Medicine, Stanford, CA, USA

Yi Qu
Harvard Law School, Cambridge, MA, USA

Yinkai Wang
Tufts University
AI4Science · ML · Deep Learning · Molecule · Bioinformatics

Haorui Wang
PhD student, Georgia Tech
Machine Learning · Large Language Models · Decision Making · Uncertainty Quantification

Jeff Guo
Laboratory of Artificial Chemical Intelligence, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Jingru Gan
University of California, Los Angeles
AI for Materials Science · VQA

Parshin Shojaee
Department of Computer Science, Virginia Tech, Arlington, VA, USA

Di Luo
Department of Physics, Tsinghua University, Beijing, China

Andres M. Bran
Laboratory of Artificial Chemical Intelligence, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Gen Li
Department of Chemistry, Princeton University, Princeton, NJ, USA

Qiyuan Zhao
Deep Principle, Hangzhou, China

Shao-Xiong Lennon Luo
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA

Yuxuan Zhang
Vector Institute for Artificial Intelligence, Toronto, ON, Canada

Xiang Zou
Department of Chemical Engineering & Applied Chemistry, University of Toronto, Toronto, ON, Canada

Wanru Zhao
Department of Computer Science and Technology, University of Cambridge, Cambridge, United Kingdom

Yifan F. Zhang
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA

Wucheng Zhang
Department of Physics, Princeton University, Princeton, NJ, USA