Towards a Large Physics Benchmark

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the lack of standardized evaluation benchmarks for large language models (LLMs) in fundamental physics. Methodologically, it introduces the first community-driven, “living” physics benchmark framework, featuring a dynamic assessment suite spanning conceptual understanding, mathematical derivation, and complex problem solving. It incorporates a three-dimensional expert scoring rubric, grounded in the philosophy of science, that rates each question for correctness, difficulty, and surprise (i.e., novelty or non-obvious insight), and it spans multiple task formats: multiple-choice questions, step-by-step derivations, open-ended reasoning, and high-energy physics event classification. Key contributions include: (1) an extensible, openly accessible benchmark platform enabling continuous community contribution and evolution; (2) the first systematic, quantitative evaluation of both physical reasoning capability and scientific creativity in LLMs; and (3) a substantive bridge between AI research and physics.
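The summary does not specify how the three expert-rated dimensions are stored or combined. As a rough illustration only, here is a minimal Python sketch of one plausible representation and aggregation; the `ExpertScore` and `Question` classes and the rating scales are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ExpertScore:
    """One expert's rating of a benchmark question (scales are assumed)."""
    correctness: float  # was the model's answer right? e.g. 0.0-1.0
    difficulty: float   # how hard is the question? e.g. 1-5
    surprise: float     # how novel/non-obvious is the insight? e.g. 1-5

@dataclass
class Question:
    """A benchmark entry with its expert ratings."""
    text: str
    kind: str  # "multiple-choice", "derivation", or "open-ended"
    scores: list[ExpertScore] = field(default_factory=list)

    def aggregate(self) -> dict[str, float]:
        """Average each rubric dimension across the experts who rated it."""
        return {
            "correctness": mean(s.correctness for s in self.scores),
            "difficulty": mean(s.difficulty for s in self.scores),
            "surprise": mean(s.surprise for s in self.scores),
        }

# Example: one derivation question rated by two experts
q = Question(
    text="Derive the running of the strong coupling at one loop.",
    kind="derivation",
    scores=[ExpertScore(1.0, 4, 2), ExpertScore(0.8, 5, 3)],
)
print(q.aggregate())  # {'correctness': 0.9, 'difficulty': 4.5, 'surprise': 2.5}
```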

📝 Abstract
We introduce a benchmark framework developed by and for the scientific community to evaluate, monitor, and steer large language model development in fundamental physics. Building on philosophical concepts of scientific understanding and creativity, we develop a scoring system in which each question is scored by an expert for its correctness, difficulty, and surprise. The questions are of three forms: (i) multiple-choice questions for conceptual understanding, (ii) analytical problems requiring mathematical derivation, and (iii) open-ended tasks requiring complex problem solving. Our current dataset contains a diverse set of examples, including a machine learning challenge to classify high-energy physics events, such as the four-top-quark signal. To ensure continued relevance, we propose a living benchmark to which physicists contribute questions, for instance alongside new publications. We invite contributions via http://www.physicsbenchmarks.org/. We hope that this benchmark will enable targeted AI development that can make a meaningful contribution to fundamental physics research.
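The abstract mentions a machine learning challenge to classify high-energy physics events such as the four-top-quark signal. As an illustration only, the sketch below trains a standard tabular signal-versus-background classifier on synthetic stand-in features; the feature choices and all numbers are invented for the example, and the actual challenge data would come from the benchmark platform:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in kinematic features per event (jet multiplicity, H_T, b-tag count,
# missing E_T), drawn from made-up distributions for signal and background.
n = 5000
background = rng.normal(loc=[6, 600, 2, 60], scale=[2, 150, 1, 30], size=(n, 4))
signal = rng.normal(loc=[10, 1100, 4, 80], scale=[2, 200, 1, 35], size=(n, 4))

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = four-top signal
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A gradient-boosted tree is a common baseline for tabular event classification
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```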
Problem

Research questions and friction points this paper is trying to address.

Evaluate large language models in fundamental physics research
Develop scoring system for correctness, difficulty, and surprise
Create living benchmark with physicist-contributed questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark framework for physics AI evaluation
Scoring system with expert-rated criteria
Living benchmark with community contributions