🤖 AI Summary
Existing LLM evaluation benchmarks for scientific knowledge lack comprehensive coverage of both the breadth and the depth of scientific reasoning. Method: We propose a holistic evaluation framework organized around a five-level cognitive taxonomy (studying extensively, inquiring earnestly, thinking profoundly, discerning clearly, and practicing assiduously) and construct a large-scale, multi-granularity annotated dataset of 70K interdisciplinary problems spanning biology, chemistry, physics, and materials science. We further release a unified evaluation protocol and conduct systematic zero-shot and few-shot evaluations of 26 state-of-the-art open-source and proprietary models. Contribution/Results: Our analysis reveals persistent bottlenecks in advanced scientific reasoning and practical application, even among top-performing models. This work provides a hierarchical, measurable characterization of scientific cognition and a reproducible, extensible benchmark to guide the development of science-oriented foundation models.
📝 Abstract
Large language models (LLMs) have gained increasing prominence in scientific research, but there is a lack of comprehensive benchmarks that fully evaluate their proficiency in understanding and mastering scientific knowledge. To address this need, we introduce the SciKnowEval benchmark, a novel framework that systematically evaluates LLMs across five progressive levels of scientific knowledge: studying extensively, inquiring earnestly, thinking profoundly, discerning clearly, and practicing assiduously. These levels aim to assess the breadth and depth of scientific knowledge in LLMs, covering memory, comprehension, reasoning, discernment, and application. Specifically, we first construct a large-scale evaluation dataset encompassing 70K multi-level scientific problems and solutions in the domains of biology, chemistry, physics, and materials science. Leveraging this dataset, we benchmark 26 advanced open-source and proprietary LLMs using zero-shot and few-shot prompting strategies. The results reveal that despite the state-of-the-art performance of proprietary LLMs, there is still significant room for improvement, particularly in scientific reasoning and application. We anticipate that SciKnowEval will establish a standard for benchmarking LLMs in scientific research and promote the development of stronger scientific LLMs. The dataset and code are publicly available at https://scimind.ai/sciknoweval.
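As a rough illustration of the zero-shot evaluation setup described above, the sketch below scores a model on multiple-choice items from a SciKnowEval-style benchmark. The file name, the item fields (`question`, `choices`, `answer`), the accuracy metric, and the use of the OpenAI chat API as the model backend are all illustrative assumptions; they do not reflect the official SciKnowEval data schema or evaluation code.

```python
# Hypothetical zero-shot evaluation loop for a SciKnowEval-style benchmark.
# Assumptions (not from the paper): a local JSON file "sciknoweval_sample.json"
# with items shaped like {"question": str, "choices": {"A": str, ...}, "answer": "A"},
# and the OpenAI chat API standing in for any LLM backend.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def zero_shot_prompt(item):
    """Format one multiple-choice item as a single zero-shot prompt."""
    options = "\n".join(f"{k}. {v}" for k, v in item["choices"].items())
    return (
        f"{item['question']}\n{options}\n"
        "Answer with the letter of the correct option only."
    )


def ask(prompt, model="gpt-4o"):
    """Query the placeholder LLM backend and return its text reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def evaluate(path="sciknoweval_sample.json"):
    """Compute simple accuracy over the (assumed) multiple-choice items."""
    with open(path) as f:
        items = json.load(f)
    correct = 0
    for item in items:
        reply = ask(zero_shot_prompt(item))
        # Take the first character of the reply as the predicted option letter.
        if reply[:1].upper() == item["answer"].upper():
            correct += 1
    return correct / len(items)


if __name__ == "__main__":
    print(f"Zero-shot accuracy: {evaluate():.2%}")
```

A few-shot variant would simply prepend a handful of solved example items to the prompt before the target question; the paper's actual protocol, prompts, and scoring rules should be taken from the released code rather than this sketch.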