Pixels to Principles: Probing Intuitive Physics Understanding in Multimodal Language Models

📅 2025-07-22
🤖 AI Summary
This study systematically evaluates multimodal large language models (MLLMs) on intuitive physical reasoning, specifically their ability to distinguish physically plausible from implausible scenes. Method: Leveraging the GRASP and IntPhys 2 benchmarks, we conduct cross-modal alignment analysis at critical processing stages of leading MLLMs (including InternVL, Qwen, LLaVA, and Gemini) using embedding-space probing techniques. Contribution/Results: We find that state-of-the-art MLLMs consistently fail to make reliable physical plausibility judgments. Crucially, this failure stems not from inadequate visual encoding (visual features retain salient physical cues) but from weak cross-modal integration: the language module fails to effectively decode and utilize those cues. To our knowledge, this is the first work to diagnose the modality alignment bottleneck via intermediate representation probing, yielding an interpretable, mechanistic account of MLLM limitations in physical reasoning and identifying concrete directions for architectural and training-level optimization.

📝 Abstract
This paper presents a systematic evaluation of state-of-the-art multimodal large language models (MLLMs) on intuitive physics tasks using the GRASP and IntPhys 2 datasets. We assess the open-source models InternVL 2.5, Qwen 2.5 VL, LLaVA-OneVision, and the proprietary Gemini 2.0 Flash Thinking, finding that even the latest models struggle to reliably distinguish physically plausible from implausible scenarios. To go beyond performance metrics, we conduct a probing analysis of model embeddings, extracting intermediate representations at key processing stages to examine how well task-relevant information is preserved. Our results show that, depending on task difficulty, a critical vision-language misalignment can emerge: vision encoders successfully capture physical plausibility cues, but this information is not effectively utilized by the language model, leading to failures in reasoning. This misalignment suggests that the primary limitation of MLLMs in intuitive physics tasks is not the vision component but the ineffective integration of visual and linguistic information. Our findings highlight vision-language alignment as a key area for improvement, offering insights for future MLLMs development.
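The probing analysis described above can be illustrated with a minimal sketch: a linear (logistic-regression) probe is trained on frozen intermediate embeddings to predict binary plausibility labels, and its accuracy indicates how much task-relevant information the representation preserves at that stage. The synthetic embeddings, dimensions, and hyperparameters below are illustrative stand-ins, not the paper's actual setup.

```python
import numpy as np

def train_linear_probe(X, y, lr=0.1, epochs=500):
    """Fit a logistic-regression probe on frozen embeddings X (n, d)
    with binary plausibility labels y (n,) via gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30.0, 30.0)  # clip logits for numerical safety
        p = 1.0 / (1.0 + np.exp(-z))         # sigmoid probabilities
        grad_w = X.T @ (p - y) / len(y)      # gradient of mean log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def probe_accuracy(w, b, X, y):
    """Fraction of examples the probe classifies correctly."""
    preds = (X @ w + b) > 0
    return float(np.mean(preds == y))

# Synthetic stand-in for intermediate embeddings: two classes shifted
# along one direction, mimicking a "plausible vs. implausible" cue
# that a probe can read out if the representation preserves it.
rng = np.random.default_rng(1)
d = 32
cue = rng.normal(size=d)
X_plausible = rng.normal(size=(100, d)) + cue
X_implausible = rng.normal(size=(100, d)) - cue
X = np.vstack([X_plausible, X_implausible])
y = np.concatenate([np.ones(100), np.zeros(100)])

w, b = train_linear_probe(X, y)
acc = probe_accuracy(w, b, X, y)
```

In the paper's framing, running such a probe on vision-encoder outputs versus later language-model states would reveal where plausibility information is retained and where it is lost.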
Problem

Research questions and friction points this paper is trying to address.

Can state-of-the-art MLLMs reliably distinguish physically plausible from implausible scenes on the GRASP and IntPhys 2 benchmarks?
Where in the processing pipeline is task-relevant physical information preserved or lost?
Is the bottleneck the vision encoder itself, or the integration of visual and linguistic information?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic evaluation of open-source MLLMs (InternVL 2.5, Qwen 2.5 VL, LLaVA-OneVision) and the proprietary Gemini 2.0 Flash Thinking on intuitive physics tasks
Probing of intermediate embeddings to trace where physical plausibility information is preserved across processing stages
Mechanistic diagnosis of vision-language misalignment, rather than weak visual encoding, as the primary failure mode