🤖 AI Summary
Omnidirectional image-based 3D reconstruction suffers from geometric distortions inherent to equirectangular projection (ERP), which are most severe near the poles and degrade reconstruction accuracy; moreover, the field lacks standardized, systematic benchmark datasets. To address this, we introduce OB3D, the first synthetic benchmark designed specifically for omnidirectional 3D reconstruction. Built from Blender projects, OB3D comprises high-fidelity, diverse, and geometrically complex scenes rendered in ERP. It provides multi-view omnidirectional RGB images together with pixel-aligned depth maps, surface normal maps, and ground-truth camera parameters. The dataset is compatible with modern radiance-field methods, including NeRF and 3D Gaussian Splatting (3DGS), and enables rigorous, quantitative evaluation of omnidirectional reconstruction algorithms, with particular attention to geometric accuracy in polar and high-latitude regions. The dataset is publicly released to serve as a common benchmark for omnidirectional 3D reconstruction research.
📝 Abstract
Recent advancements in radiance field rendering, exemplified by Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have significantly advanced 3D modeling and reconstruction. The use of multiple 360-degree omnidirectional images for these tasks is increasingly favored due to advantages in data acquisition and comprehensive scene capture. However, the inherent geometric distortions in common omnidirectional representations, such as equirectangular projection (particularly severe in polar regions and varying with latitude), pose substantial challenges to achieving high-fidelity 3D reconstruction. Current datasets, while valuable, often lack the specific focus, scene composition, and ground-truth granularity required to systematically benchmark and drive progress in overcoming these omnidirectional-specific challenges. To address this critical gap, we introduce Omnidirectional Blender 3D (OB3D), a new synthetic dataset curated for advancing 3D reconstruction from multiple omnidirectional images. OB3D features diverse and complex 3D scenes generated from Blender 3D projects, with a deliberate emphasis on challenging scenarios. The dataset provides comprehensive ground truth, including omnidirectional RGB images, precise omnidirectional camera parameters, and pixel-aligned equirectangular maps for depth and normals, alongside evaluation metrics. By offering a controlled yet challenging environment, OB3D aims to facilitate the rigorous evaluation of existing methods and to prompt the development of new techniques that enhance the accuracy and reliability of 3D reconstruction from omnidirectional images.
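The polar distortion the abstract refers to follows directly from the equirectangular mapping: every image row spans the full 360° of longitude, so pixels near the poles cover far less solid angle than equatorial ones. The sketch below illustrates this with the standard pixel-to-ray mapping and the latitude-dependent per-pixel solid angle; the function names are illustrative and not part of the OB3D release or any specific library.

```python
import numpy as np

def equirect_ray_dirs(h, w):
    """Unit ray direction for each pixel of an h x w equirectangular image.

    Pixel centers are mapped to longitude in [-pi, pi) and latitude in
    (-pi/2, pi/2); convention (y-up) is illustrative, not OB3D's spec.
    """
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)          # shapes (h, w)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)       # (h, w, 3), unit norm

def pixel_solid_angle(h, w):
    """Solid angle (steradians) covered by a pixel in each image row.

    Constant per row: (2*pi/w) * (pi/h) * cos(latitude). The cos(latitude)
    factor is exactly the polar oversampling the text describes -- rows near
    the poles receive as many pixels as the equator but cover almost no area.
    """
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    return (2.0 * np.pi / w) * (np.pi / h) * np.cos(lat)
```

Summed over all pixels, the solid angles approach the full sphere (4π sr), while the ratio between an equatorial and a near-polar row makes the severity of the distortion explicit: the same pixel budget is spent on a vanishingly small patch of the scene at the poles.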