🤖 AI Summary
Existing benchmarks do not systematically evaluate the physics reasoning of large language models (LLMs) at the undergraduate level. Method: We introduce UGPhysics, a large-scale multilingual (English and Chinese), multidisciplinary benchmark for undergraduate physics reasoning, comprising 5,520 problems rigorously screened for data leakage and spanning 13 subjects, seven answer types, and four distinct physics reasoning skills. We further propose Model-Assistant Rule-based Judgment (MARJ), an automated evaluation pipeline that combines rule-based answer matching with model assistance to accurately judge the correctness of physics answers. Contribution/Results: Evaluation of 31 leading LLMs shows that the best performer (OpenAI-o1-mini) reaches only 49.8% overall accuracy, underscoring the need for models with physics reasoning skills beyond mathematical ability and establishing a rigorous baseline for future work.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable capabilities in solving complex reasoning tasks, particularly in mathematics. However, physics reasoning presents unique challenges that have received significantly less attention. Existing benchmarks often fall short in evaluating LLMs' abilities across the breadth and depth of undergraduate-level physics, underscoring the need for a comprehensive evaluation. To fill this gap, we introduce UGPhysics, a large-scale, comprehensive benchmark specifically designed to evaluate UnderGraduate-level Physics (UGPhysics) reasoning with LLMs. UGPhysics includes 5,520 undergraduate-level physics problems in both English and Chinese, covering 13 subjects with seven different answer types and four distinct physics reasoning skills, all rigorously screened for data leakage. Additionally, we develop a Model-Assistant Rule-based Judgment (MARJ) pipeline specifically tailored for assessing the answer correctness of physics problems, ensuring accurate evaluation. Our evaluation of 31 leading LLMs shows that the highest overall accuracy, 49.8% (achieved by OpenAI-o1-mini), underscores the need for models with physics reasoning skills stronger than math abilities alone. We hope UGPhysics, along with MARJ, will drive future advances in AI for physics reasoning.
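To illustrate the general shape of a model-assisted, rule-based judgment pipeline like MARJ, the sketch below applies cheap deterministic checks first (exact match, numeric equivalence) and falls back to an LLM judge only when the rules cannot decide. All function names, tolerances, and the split between rule-decidable and model-decidable cases are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical MARJ-style judging sketch (not the paper's implementation):
# rule-based checks decide easy cases; an LLM judge handles the rest.
from fractions import Fraction


def rule_based_judge(predicted: str, reference: str):
    """Return True/False if rules can decide, or None if ambiguous."""
    p, r = predicted.strip().lower(), reference.strip().lower()
    if p == r:  # exact string match
        return True
    try:
        # Numeric equivalence, e.g. "1/2" vs "0.5" (illustrative tolerance)
        return abs(float(Fraction(p)) - float(Fraction(r))) < 1e-6
    except (ValueError, ZeroDivisionError):
        # Symbolic or otherwise non-numeric answers: rules give up
        return None


def marj_judge(predicted: str, reference: str, model_judge) -> bool:
    """Rules first; only call the (expensive) model judge when needed."""
    verdict = rule_based_judge(predicted, reference)
    if verdict is not None:
        return verdict
    return model_judge(predicted, reference)  # LLM fallback for hard cases
```

For example, `marj_judge("1/2", "0.5", some_llm_judge)` is settled by the numeric rule without any model call, while a symbolic pair like `"x^2"` vs `"x**2"` would be deferred to the model judge.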