Scientists' First Exam: Probing Cognitive Abilities of MLLM via Perception, Understanding, and Reasoning

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing scientific benchmarks predominantly emphasize knowledge comprehension, neglecting the evaluation of multimodal large language models' (MLLMs) perceptual and reasoning capabilities. To address this gap, we propose SFE, the first multimodal benchmark explicitly designed to assess scientific cognitive abilities through a three-tiered, progressively demanding evaluation framework: signal perception → attribute understanding → comparative reasoning. SFE spans five high-value disciplines (physics, chemistry, biology, earth science, and astronomy), comprising 66 multimodal tasks and 830 expert-validated visual question-answering (VQA) samples. It integrates cross-disciplinary scientific data modeling, domain-expert collaborative annotation, and a layered cognitive assessment protocol, systematically bridging the long-standing gap in scientific perception and reasoning evaluation. Experimental results reveal that the state-of-the-art models GPT-o3 and InternVL-3 achieve only 34.08% and 26.52% accuracy on SFE, respectively, underscoring critical limitations in MLLMs' scientific cognition.

📝 Abstract
Scientific discoveries increasingly rely on complex multimodal reasoning based on information-intensive scientific data and domain-specific expertise. Empowered by expert-level scientific benchmarks, scientific Multimodal Large Language Models (MLLMs) hold the potential to significantly enhance this discovery process in realistic workflows. However, current scientific benchmarks mostly focus on evaluating the knowledge understanding capabilities of MLLMs, leading to an inadequate assessment of their perception and reasoning abilities. To address this gap, we present the Scientists' First Exam (SFE) benchmark, designed to evaluate the scientific cognitive capacities of MLLMs through three interconnected levels: scientific signal perception, scientific attribute understanding, and scientific comparative reasoning. Specifically, SFE comprises 830 expert-verified VQA pairs across three question types, spanning 66 multimodal tasks across five high-value disciplines. Extensive experiments reveal that the current state-of-the-art GPT-o3 and InternVL-3 achieve only 34.08% and 26.52% accuracy on SFE, highlighting significant room for MLLMs to improve in scientific realms. We hope the insights obtained in SFE will facilitate further developments in AI-enhanced scientific discoveries.
Problem

Research questions and friction points this paper is trying to address.

Evaluating MLLMs' perception, understanding, and reasoning in science
Addressing gaps in current scientific benchmarks for MLLMs
Assessing AI's cognitive abilities via multidisciplinary VQA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

SFE benchmark evaluates MLLM cognitive abilities
Multimodal tasks span five high-value disciplines
Expert-verified VQA pairs support rigorous evaluation
Yuhao Zhou
Shanghai Artificial Intelligence Laboratory
Yiheng Wang
Shanghai Artificial Intelligence Laboratory
Xuming He
Shanghai Artificial Intelligence Laboratory
Ruoyao Xiao
Shanghai Artificial Intelligence Laboratory
Zhiwei Li
Shanghai Artificial Intelligence Laboratory
Qiantai Feng
Shanghai Artificial Intelligence Laboratory
Zijie Guo
Shanghai Artificial Intelligence Laboratory
Yuejin Yang
Shanghai Artificial Intelligence Laboratory
Hao Wu
Shanghai Artificial Intelligence Laboratory
Wenxuan Huang
CUHK & ECNU
Artificial General Intelligence · MLLM · LLM · AIGC · Model Acceleration
Jiaqi Wei
PhD student, Zhejiang University
NLP · LLM · AI for Science
Dan Si
Shanghai Artificial Intelligence Laboratory
Xiuqi Yao
Shanghai Artificial Intelligence Laboratory
Jia Bu
Shanghai Artificial Intelligence Laboratory
Haiwen Huang
Shanghai Artificial Intelligence Laboratory
Tianfan Fu
Nanjing University
AI for Drug · AI for Science · Large Language Model
Shixiang Tang
Shanghai Artificial Intelligence Laboratory
Ben Fei
Shanghai Artificial Intelligence Laboratory
Dongzhan Zhou
Researcher at Shanghai AI Lab
AI4Science · computer vision · deep learning
Fenghua Ling
Shanghai Artificial Intelligence Laboratory
AI4Climate · Climate prediction · Weather prediction
Yan Lu
Shanghai Artificial Intelligence Laboratory
Siqi Sun
Shanghai Artificial Intelligence Laboratory
Chenhui Li
Baidu
AI · NLP · CV
Guanjie Zheng
Shanghai Jiao Tong University
Data mining · machine learning
Jiancheng Lv
University of Science and Technology of China
Operations Management · Marketing
Wenlong Zhang
Shanghai Artificial Intelligence Laboratory
Lei Bai
Shanghai AI Laboratory
Foundation Model · Science Intelligence · Multi-Agent System · Autonomous Discovery