🤖 AI Summary
Large language models (LLMs) lack systematic evaluation and effective enhancement of their physics reasoning capabilities. Method: This paper introduces PHYSICS, a benchmark comprising 16,568 high-quality problems spanning mechanics, electromagnetism, thermodynamics, optics, and modern physics, ranging from high-school to graduate-level difficulty. The authors propose Rule+Model, the first rule-augmented evaluation framework that integrates symbolic rule engines with LLMs to address physics-specific challenges, including unit conversion, algebraic simplification, and numerical precision. High-fidelity annotations and reproducible evaluation are achieved via LLM-assisted reasoning-path generation and rigorous data curation. Contribution/Results: Experiments reveal substantial deficiencies in current open- and closed-source LLMs on physics reasoning. This work establishes the first standardized infrastructure and methodology for evaluating and training physics-capable LLMs.
📝 Abstract
Large Language Models (LLMs) have achieved remarkable progress on advanced reasoning tasks such as mathematics and coding competitions. Meanwhile, physics, despite being both reasoning-intensive and essential to real-world understanding, has received limited academic and industrial attention. This paper introduces PHYSICS, a dataset containing 16,568 high-quality physics problems spanning subjects and difficulty levels, to address this gap. Specifically, PHYSICS is curated from exercises in over 100 textbooks through a carefully designed pipeline for quality control. It covers five major physics domains: Mechanics, Electromagnetism, Thermodynamics, Optics, and Modern Physics. It also spans a wide range of difficulty levels, from high-school to graduate-level physics courses. To use the data both to improve and to evaluate models' physical reasoning capabilities, we split the dataset into training and test sets, and provide reasoning paths generated by powerful reasoning models for the training data to facilitate model training. In addition, for evaluation, we find that existing frameworks exhibit biases in aspects such as units, simplification, and precision in the physics domain. To balance efficiency and accuracy, we introduce a Rule+Model evaluation framework tailored to physics problems. Our evaluations of current state-of-the-art open-source and proprietary models highlight their limitations in handling physics-related tasks. We hope that our dataset and evaluation methodology will jointly advance the development of LLMs in the field of physics.
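To make the rule stage of a Rule+Model setup concrete, here is a minimal sketch of a rule-based answer matcher using SymPy. It is an illustration under stated assumptions, not the paper's actual implementation: the function name, the fallback order (symbolic simplification first, then a numerical tolerance check), and the tolerance value are all hypothetical; a full framework would also handle unit conversion and hand ambiguous cases to an LLM judge.

```python
import sympy as sp

def answers_match(predicted: str, reference: str, rel_tol: float = 1e-3) -> bool:
    """Rule-based equivalence check for physics answers (illustrative sketch).

    Rule 1: the two expressions are symbolically equal after simplification,
            so algebraically equivalent forms like m*v**2/2 and (1/2)*m*v**2 match.
    Rule 2: both are numeric and agree within a relative tolerance, absorbing
            rounding differences in the model's final numerical answer.
    """
    try:
        pred = sp.sympify(predicted)
        ref = sp.sympify(reference)
    except (sp.SympifyError, TypeError):
        return False  # unparseable answer: defer to a model-based judge

    # Rule 1: symbolic equivalence.
    if sp.simplify(pred - ref) == 0:
        return True

    # Rule 2: numeric closeness (fails for expressions with free symbols).
    try:
        p, r = float(pred), float(ref)
        return abs(p - r) <= rel_tol * max(abs(r), 1e-12)
    except TypeError:
        return False
```

For example, `answers_match("1/2*m*v**2", "m*v**2/2")` accepts an algebraically rearranged kinetic-energy formula, and `answers_match("3.1416", "3.14159")` accepts a rounded numerical answer, while `answers_match("2*m", "m")` is rejected. Only inputs the rules cannot decide would need the slower model-based judge, which is how such a hybrid can balance efficiency and accuracy.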