4D-Bench: Benchmarking Multi-modal Large Language Models for 4D Object Understanding

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
There is currently no publicly available, standardized benchmark for evaluating multimodal large language models’ (MLLMs) understanding of 4D objects, i.e., objects characterized by both 3D spatial structure and temporal evolution. Method: We introduce 4D-Bench, the first standardized benchmark dedicated to 4D object understanding, comprising two core tasks: 4D object question answering (QA) and 4D object captioning. It represents dynamic 3D objects as multi-view video sequences and establishes a novel evaluation paradigm that requires joint spatio-temporal reasoning across viewpoints. To ensure broad applicability, we employ spatial alignment, temporal sampling, and cross-modal annotation techniques, enabling unified assessment of both open- and closed-weight MLLMs. Contribution/Results: 4D-Bench systematically captures geometric diversity, temporal complexity, and semantic richness. Empirical evaluation reveals a critical limitation in MLLMs’ temporal modeling: GPT-4o achieves only 63% accuracy on 4D object QA (vs. a 91% human baseline), with temporal understanding lagging substantially behind appearance understanding, particularly among open-source models.
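The temporal sampling mentioned above is, in many multi-view video pipelines, simply uniform frame selection per viewpoint so that the videos fit within an MLLM's limited image budget. Below is a minimal sketch under that assumption; the function names and the per-view frame budget are illustrative, not the paper's exact protocol.

```python
# Hedged sketch: uniform temporal sampling over multi-view videos.
# Assumes frames arrive as one ordered list of frame paths per viewpoint;
# 4D-Bench's actual preprocessing may differ.

def uniform_indices(total_frames: int, k: int) -> list[int]:
    """Evenly spaced frame indices in [0, total_frames)."""
    if k >= total_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / max(k - 1, 1)
    return [round(i * step) for i in range(k)]

def sample_multiview(videos: dict[str, list[str]],
                     frames_per_view: int = 8) -> dict[str, list[str]]:
    """Keep frames_per_view evenly spaced frames from each viewpoint's video."""
    return {
        view: [frames[i] for i in uniform_indices(len(frames), frames_per_view)]
        for view, frames in videos.items()
    }
```

For example, a 24-frame video sampled at 8 frames per view keeps indices [0, 3, 7, 10, 13, 16, 20, 23], preserving the full temporal span at a fraction of the token cost.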

📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated impressive 2D image/video understanding capabilities. However, there are no publicly available standardized benchmarks to assess the abilities of MLLMs in understanding 4D objects (3D objects that evolve over time). In this paper, we introduce 4D-Bench, the first benchmark to evaluate the capabilities of MLLMs in 4D object understanding, featuring tasks in 4D object Question Answering (4D object QA) and 4D object captioning. 4D-Bench provides 4D objects with diverse categories, high-quality annotations, and tasks necessitating multi-view spatial-temporal understanding, different from existing 2D image/video-based benchmarks. With 4D-Bench, we evaluate a wide range of open-source and closed-source MLLMs. The results of the 4D object captioning experiment indicate that MLLMs generally exhibit weaker temporal understanding compared to their appearance understanding; notably, while open-source models approach closed-source performance in appearance understanding, they show larger performance gaps in temporal understanding. 4D object QA yields surprising findings: even with simple single-object videos, MLLMs perform poorly, with the state-of-the-art GPT-4o achieving only 63% accuracy compared to the human baseline of 91%. These findings highlight a substantial gap in 4D object understanding and the need for further advancements in MLLMs.
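To make the reported QA numbers concrete (GPT-4o at 63% vs. the 91% human baseline), the sketch below shows a minimal multiple-choice evaluation loop. The JSON field names (multiview_frames, question, choices, answer_index) and the file name are hypothetical; the released 4D-Bench format may differ.

```python
# Minimal sketch of a multiple-choice 4D object QA evaluation loop.
# All field names and the file path are assumptions for illustration only.
import json

def evaluate_qa(predict, samples_path="4d_bench_qa.json") -> float:
    """predict(frames, question, choices) -> index of the model's chosen answer."""
    with open(samples_path) as f:
        samples = json.load(f)
    correct = 0
    for s in samples:
        # Each sample pairs a question with frames rendered from several
        # viewpoints over time, so answering requires joint spatio-temporal reasoning.
        pred = predict(s["multiview_frames"], s["question"], s["choices"])
        correct += int(pred == s["answer_index"])
    return correct / len(samples)  # accuracy in [0, 1]
```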
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized benchmarks for 4D object understanding in MLLMs
Evaluating MLLMs' spatial-temporal comprehension via 4D QA and captioning
Identifying performance gaps in temporal vs. appearance understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces 4D-Bench for MLLM evaluation
Features 4D object QA and captioning tasks
Highlights MLLM gaps in temporal understanding
👥 Authors
Wenxuan Zhu, MS/PhD student, King Abdullah University of Science and Technology (KAUST)
Bing Li, KAUST
Cheng Zheng, KAUST
Jinjie Mai, KAUST
Jun Chen, KAUST
Letian Jiang, Master's student, KAUST
Abdullah Hamdi, postdoctoral research fellow, University of Oxford
Sara Rojas Martinez, KAUST
Chia-Wen Lin, National Tsing Hua University
Mohamed Elhoseiny, KAUST
Bernard Ghanem, Professor, KAUST