PhysUniBench: An Undergraduate-Level Physics Reasoning Benchmark for Multimodal Models

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing evaluation methods inadequately assess multimodal large language models’ (MLLMs) true capabilities in undergraduate physics reasoning, particularly lacking integrated evaluation of conceptual understanding, mathematical derivation, and diagram interpretation. Method: We introduce PhysUniBench, the first multimodal reasoning benchmark tailored to undergraduate physics, comprising 3,304 diagram-augmented questions across eight subdomains. It features a five-level difficulty taxonomy, calibrated via expert review and model-in-the-loop filtering, and is constructed through a rigorous four-stage pipeline: human annotation, automated filtering, model-based closed-loop evaluation, and multimodal design. Contribution/Results: Experiments show that even strong MLLMs struggle: GPT-4o mini, for example, achieves only 34.2% accuracy, exposing critical bottlenecks in multi-step physical reasoning and precise diagram comprehension. PhysUniBench provides a publicly available resource for scientifically grounded MLLM evaluation.

📝 Abstract
Physics problem-solving is a challenging domain for large AI models, requiring integration of conceptual understanding, mathematical reasoning, and interpretation of physical diagrams. Current evaluation methodologies show notable limitations in capturing the breadth and complexity of undergraduate-level physics, underscoring the need for more rigorous assessments. To this end, we present PhysUniBench, a large-scale multimodal benchmark designed to evaluate and improve the reasoning capabilities of multimodal large language models (MLLMs) specifically on undergraduate-level physics problems. PhysUniBench consists of 3,304 physics questions spanning 8 major sub-disciplines of physics, each accompanied by a visual diagram. The benchmark includes both open-ended and multiple-choice questions, systematically curated and difficulty-rated through an iterative model-in-the-loop process. The benchmark's construction involved a rigorous multi-stage process, including multiple roll-outs, expert-level evaluation, automated filtering of easily solved problems, and a nuanced difficulty grading system with five levels. Through extensive experiments, we observe that current state-of-the-art models encounter substantial challenges in physics reasoning. For example, GPT-4o mini achieves only about 34.2% accuracy on the proposed PhysUniBench. These results highlight that current MLLMs struggle with advanced physics reasoning, especially on multi-step problems and those requiring precise diagram interpretation. By providing a broad and rigorous assessment tool, PhysUniBench aims to drive progress in AI for Science, encouraging the development of models with stronger physical reasoning, problem-solving skills, and multimodal understanding. The benchmark and evaluation scripts are available at https://prismax-team.github.io/PhysUniBenchmark/.
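
To make the headline number concrete: single-pass accuracy on a benchmark of this kind is simply the fraction of questions answered correctly. The sketch below is illustrative only; the `model.solve` interface and exact-match grading are assumptions, and the authors' released evaluation scripts at the project page above are authoritative.

```python
from dataclasses import dataclass

@dataclass
class Question:
    text: str          # problem statement
    diagram_path: str  # path to the accompanying diagram image
    answer: str        # reference answer (option letter or final value)

def evaluate(questions: list[Question], model) -> float:
    """Single-pass accuracy: fraction of exact-match correct answers.
    Multiple-choice items compare by option letter; open-ended items
    would need a more tolerant checker (units, symbolic equivalence)."""
    correct = sum(
        model.solve(q.text, q.diagram_path).strip() == q.answer.strip()
        for q in questions
    )
    return correct / len(questions)
```

Under this reading, the reported GPT-4o mini result corresponds to `evaluate(...)` returning roughly 0.342 over the 3,304 questions.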
Problem

Research questions and friction points this paper is trying to address.

Existing evaluations fail to capture multimodal models' true undergraduate physics reasoning ability.
Current physics problem-solving assessments lack breadth, difficulty calibration, and rigor.
There is no integrated test of AI's ability to interpret diagrams while solving multi-step problems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal benchmark for undergraduate physics reasoning
Model-in-the-loop difficulty rating system
Automated filtering of easily solved problems (both sketched in code below)
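
The last two items admit a compact illustration. The sketch below is a hypothetical reading of model-in-the-loop curation, not the authors' implementation: each question is attempted over several independent roll-outs, questions solved on every roll-out are filtered out as too easy, and the survivors' solve rates are mapped onto the five difficulty levels. The `model.solve` interface, roll-out count, and thresholds are all assumptions.

```python
def solve_rate(question: dict, model, n_rollouts: int = 8) -> float:
    """Fraction of independent roll-outs whose final answer matches
    the reference answer (hypothetical model.solve interface)."""
    hits = sum(
        model.solve(question["text"], question["diagram_path"]) == question["answer"]
        for _ in range(n_rollouts)
    )
    return hits / n_rollouts

def difficulty_level(rate: float) -> int:
    """Map a solve rate onto five levels (1 = easiest, 5 = hardest).
    The thresholds are illustrative placeholders."""
    thresholds = (0.8, 0.6, 0.4, 0.2, 0.0)
    return next(level for level, lo in enumerate(thresholds, start=1) if rate >= lo)

def curate(questions: list[dict], model) -> list[tuple[dict, int]]:
    """Drop questions solved on every roll-out (too easy to be
    informative), then grade the remainder."""
    graded = []
    for q in questions:
        rate = solve_rate(q, model)
        if rate == 1.0:  # automated filtering of easily solved problems
            continue
        graded.append((q, difficulty_level(rate)))
    return graded
```

Expert review would then adjust these machine-assigned levels, matching the paper's combination of automated filtering and expert-level evaluation.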