🤖 AI Summary
Existing large language models (LLMs) lack systematic evaluation of scientific understanding across multidisciplinary domains. Method: We propose SciCUEval, the first fine-grained, multidimensional benchmark covering ten scientific subfields (e.g., biology, chemistry, physics), integrating multimodal scientific data (structured tables, knowledge graphs, and unstructured text) to assess four core competencies: relevant-information identification, information-absence detection, multi-source information integration, and context-aware inference. The benchmark provides a structured evaluation framework for scientific context understanding, combining domain-expert collaborative annotation, knowledge-graph alignment, and scientific semantic enhancement. Contribution/Results: Empirical evaluation of state-of-the-art LLMs reveals significant cross-disciplinary performance disparities and shared weaknesses, particularly in rigorous scientific reasoning and multi-source data integration. SciCUEval provides a reproducible, diagnostic benchmark to guide the development and refinement of scientific-domain LLMs.
📝 Abstract
Large Language Models (LLMs) have shown impressive capabilities in contextual understanding and reasoning. However, evaluating their performance across diverse scientific domains remains underexplored, as existing benchmarks primarily focus on general domains and fail to capture the intricate complexity of scientific data. To bridge this gap, we construct SciCUEval, a comprehensive benchmark dataset tailored to assess the scientific context-understanding capability of LLMs. It comprises ten domain-specific sub-datasets spanning biology, chemistry, physics, biomedicine, and materials science, integrating diverse data modalities including structured tables, knowledge graphs, and unstructured text. SciCUEval systematically evaluates four core competencies through a variety of question formats: Relevant information identification, Information-absence detection, Multi-source information integration, and Context-aware inference. We conduct extensive evaluations of state-of-the-art LLMs on SciCUEval, providing a fine-grained analysis of their strengths and limitations in scientific context understanding, and offering valuable insights for the future development of scientific-domain LLMs.