Benchmarking Foundation Models with Retrieval-Augmented Generation in Olympic-Level Physics Problem Solving

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
The reasoning capabilities of foundation models on Olympiad-level physics problems, particularly when augmented with retrieval-augmented generation (RAG), remain poorly understood, especially for expert-level scientific reasoning involving diagrams, tables, and mathematical formalism. Method: We introduce PhoPile, the first multimodal benchmark specifically designed for Olympiad physics, featuring authentic problem-solving elements including figures, tables, and equations. Our RAG framework jointly leverages large language models and large multimodal models, integrating heterogeneous retrievers with a structured physics knowledge base. Contribution/Results: Experiments demonstrate that retrieval from a domain-adapted physics corpus substantially improves solution accuracy; however, multi-step, chain-of-thought reasoning remains a critical bottleneck. Beyond providing a high-quality, community-ready evaluation benchmark, this work systematically characterizes the efficacy and limitations of RAG in advanced scientific reasoning, revealing its augmentation mechanisms and failure modes, and establishes a paradigm for building trustworthy, domain-specific AI reasoning systems.

📝 Abstract
Retrieval-augmented generation (RAG) with foundation models has achieved strong performance across diverse tasks, but the capacity of these models for expert-level reasoning, such as solving Olympiad-level physics problems, remains largely unexplored. Inspired by the way students prepare for competitions by reviewing past problems, we investigate the potential of RAG to enhance physics reasoning in foundation models. We introduce PhoPile, a high-quality multimodal dataset specifically designed for Olympiad-level physics, enabling systematic study of retrieval-based reasoning. PhoPile includes diagrams, graphs, and equations, capturing the inherently multimodal nature of physics problem solving. Using PhoPile, we benchmark RAG-augmented foundation models, covering both large language models (LLMs) and large multimodal models (LMMs) paired with multiple retrievers. Our results demonstrate that integrating retrieval with physics corpora can improve model performance, while also highlighting challenges that motivate further research in retrieval-augmented physics reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating RAG models on solving Olympiad-level physics problems
Investigating retrieval-augmented generation for physics reasoning enhancement
Benchmarking multimodal models with specialized physics corpora integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multimodal dataset for physics problem solving
Benchmarks RAG with multiple retrievers and models
Integrates retrieval with physics corpora to improve performance
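The retrieve-then-generate setup the paper benchmarks can be sketched as a small pipeline: score a query against a physics corpus, pull the top passages, and prepend them to the question before it is sent to an LLM or LMM. The corpus snippets, scoring function, and prompt format below are illustrative assumptions, not PhoPile's actual retriever or data.

```python
import math
from collections import Counter

# Toy physics corpus standing in for the paper's retrieval collection.
CORPUS = [
    "Conservation of energy: total mechanical energy is constant without friction.",
    "Gauss's law relates electric flux through a closed surface to enclosed charge.",
    "For projectile motion, horizontal and vertical components are independent.",
]

def tokenize(text):
    return [t.strip(".,:;").lower() for t in text.split()]

def score(query, doc):
    # Bag-of-words overlap with inverse-length normalization (toy BM25 stand-in).
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    overlap = sum((q & d).values())
    return overlap / math.sqrt(len(tokenize(doc)))

def retrieve(query, corpus, k=1):
    # Return the k highest-scoring passages for the query.
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(question, corpus, k=1):
    # The augmented prompt would be sent to a foundation model; here we only
    # show the retrieval-augmentation step.
    context = "\n".join(retrieve(question, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer step by step:"

prompt = build_prompt("A projectile is launched at 30 degrees; find its range.", CORPUS)
```

Swapping `score` for a dense or multimodal retriever and routing `prompt` to different LLMs/LMMs is, in spirit, the benchmark's experimental grid.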
👥 Authors
Shunfeng Zheng (AAII, University of Technology Sydney, New South Wales, Australia)
Yudi Zhang (Eindhoven University of Technology, Eindhoven, The Netherlands)
Meng Fang (University of Liverpool)
Zihan Zhang (AAII, University of Technology Sydney, New South Wales, Australia)
Zhitan Wu (University of New South Wales, New South Wales, Australia)
Mykola Pechenizkiy (Eindhoven University of Technology)
Ling Chen (AAII, University of Technology Sydney, New South Wales, Australia)