🤖 AI Summary
End-to-end 3D geometric foundation models (GFMs) lack systematic, standardized evaluation—particularly for real-time 3D geometric perception.
Method: This work introduces the first comprehensive benchmark tailored to real-time 3D geometric perception, covering five core tasks: sparse-view depth estimation, video depth estimation, 3D reconstruction, multi-view pose estimation, and novel view synthesis—with support for both in-distribution and out-of-distribution evaluation. We propose a standardized evaluation framework and an automated toolchain featuring unified interfaces for heterogeneous data sources, integrated metrics spanning camera geometry, point-cloud registration, and differentiable rendering, and modular model integration with distributed evaluation capability.
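To make the "unified interfaces with modular model integration" concrete, here is a minimal sketch of what such an evaluation harness could look like. All names (`GFMWrapper`, `predict_depth`, `ConstantDepthModel`, `evaluate`) are hypothetical illustrations, not the benchmark's actual API; the metric shown is the standard absolute relative depth error.

```python
from dataclasses import dataclass
from typing import List, Protocol, Sequence, Tuple


@dataclass
class Prediction:
    """Hypothetical container for a model's dense output (flattened depth)."""
    depth: List[float]


class GFMWrapper(Protocol):
    """Illustrative unified interface: any wrapped GFM exposes predict_depth."""
    def predict_depth(self, image: Sequence[float]) -> Prediction: ...


def abs_rel(pred: Sequence[float], gt: Sequence[float]) -> float:
    """Absolute relative depth error: mean(|pred - gt| / gt)."""
    return sum(abs(p - g) / g for p, g in zip(pred, gt)) / len(gt)


class ConstantDepthModel:
    """Toy stand-in for a real GFM: predicts one constant depth everywhere."""
    def __init__(self, value: float) -> None:
        self.value = value

    def predict_depth(self, image: Sequence[float]) -> Prediction:
        return Prediction(depth=[self.value] * len(image))


def evaluate(model: GFMWrapper,
             dataset: Sequence[Tuple[Sequence[float], Sequence[float]]]) -> float:
    """Run the model over (image, ground-truth depth) pairs, average the metric."""
    errors = [abs_rel(model.predict_depth(img).depth, gt) for img, gt in dataset]
    return sum(errors) / len(errors)
```

With a common `Protocol` like this, heterogeneous models can be dropped into the same evaluation loop without per-model glue code, which is the design choice the summary's "modular model integration" implies.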
Contribution/Results: We comprehensively evaluate 16 state-of-the-art models across six diverse datasets, revealing critical trade-offs among accuracy, generalization, and real-time performance. All code, preprocessing scripts, and benchmark data are publicly released to advance standardization and reproducibility in 3D spatial intelligence research.
📝 Abstract
Spatial intelligence, encompassing 3D reconstruction, perception, and reasoning, is fundamental to applications such as robotics, aerial imaging, and extended reality. A key enabler is the real-time, accurate estimation of core 3D attributes (camera parameters, point clouds, depth maps, and 3D point tracks) from unstructured or streaming imagery. Inspired by the success of large foundation models in language and 2D vision, a new class of end-to-end 3D geometric foundation models (GFMs) has emerged, directly predicting dense 3D representations in a single feed-forward pass and eliminating the need for slow or unavailable precomputed camera parameters. Since late 2023, the field has exploded with diverse variants, but systematic evaluation is lacking. In this work, we present the first comprehensive benchmark for 3D GFMs, covering five core tasks—sparse-view depth estimation, video depth estimation, 3D reconstruction, multi-view pose estimation, and novel view synthesis—and spanning both standard and challenging out-of-distribution datasets. Our standardized toolkit automates dataset handling, evaluation protocols, and metric computation to ensure fair, reproducible comparisons. We evaluate 16 state-of-the-art GFMs, revealing their strengths and limitations across tasks and domains, and derive key insights to guide future model scaling and optimization. All code, evaluation scripts, and processed data will be publicly released to accelerate research in 3D spatial intelligence.