OB3D: A New Dataset for Benchmarking Omnidirectional 3D Reconstruction Using Blender

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Omnidirectional image-based 3D reconstruction suffers from geometric distortions inherent to equirectangular projection (ERP), which are most severe in polar regions and degrade reconstruction accuracy; moreover, the field lacks standardized, systematic benchmark datasets. To address this, we introduce OB3D, a synthetic benchmark designed specifically for omnidirectional 3D reconstruction. Built in Blender, OB3D comprises high-fidelity, diverse, and geometrically complex scenes rendered in ERP. It provides multi-view omnidirectional RGB images together with pixel-aligned depth maps, surface normal maps, and ground-truth camera parameters. The dataset is compatible with modern radiance-field methods, including NeRF and 3D Gaussian Splatting (3DGS), and enables rigorous, quantitative evaluation of omnidirectional reconstruction algorithms, particularly in the polar and high-latitude regions where ERP distortion is strongest. The dataset is publicly released, aiming to establish a community standard for omnidirectional 3D reconstruction research.
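To make the ERP distortion concrete, here is a minimal sketch (in Python, with hypothetical helper names not taken from the paper) of the standard equirectangular pixel-to-ray mapping; the cos(latitude) factor in the per-pixel solid angle shows why pixels near the poles cover almost no area on the sphere despite occupying the same image area.

```python
import numpy as np

def erp_pixel_to_ray(u, v, width, height):
    """Map ERP pixel coordinates to a unit ray direction.

    Longitude spans [-pi, pi] across the width, latitude spans
    [pi/2, -pi/2] down the height (standard equirectangular layout).
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

def erp_pixel_solid_angle(v, width, height):
    """Approximate solid angle subtended by one ERP pixel at row v.

    The cos(latitude) factor shrinks toward 0 at the poles: polar
    pixels cover almost no solid angle yet occupy the same image
    area, which is exactly the distortion the benchmark targets.
    """
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return (2.0 * np.pi / width) * (np.pi / height) * np.cos(lat)
```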

📝 Abstract
Recent advancements in radiance field rendering, exemplified by Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS), have significantly progressed 3D modeling and reconstruction. The use of multiple 360-degree omnidirectional images for these tasks is increasingly favored due to advantages in data acquisition and comprehensive scene capture. However, the inherent geometric distortions in common omnidirectional representations, such as equirectangular projection (particularly severe in polar regions and varying with latitude), pose substantial challenges to achieving high-fidelity 3D reconstructions. Current datasets, while valuable, often lack the specific focus, scene composition, and ground truth granularity required to systematically benchmark and drive progress in overcoming these omnidirectional-specific challenges. To address this critical gap, we introduce Omnidirectional Blender 3D (OB3D), a new synthetic dataset curated for advancing 3D reconstruction from multiple omnidirectional images. OB3D features diverse and complex 3D scenes generated from Blender 3D projects, with a deliberate emphasis on challenging scenarios. The dataset provides comprehensive ground truth, including omnidirectional RGB images, precise omnidirectional camera parameters, and pixel-aligned equirectangular maps for depth and normals, alongside evaluation metrics. By offering a controlled yet challenging environment, OB3D aims to facilitate the rigorous evaluation of existing methods and prompt the development of new techniques to enhance the accuracy and reliability of 3D reconstruction from omnidirectional images.
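As an illustration of how the pixel-aligned ground truth might be consumed, the sketch below back-projects an ERP depth map into a world-space point cloud. It reuses erp_pixel_to_ray from the sketch above and assumes depth is stored as distance along the viewing ray and that the camera pose is a 4x4 camera-to-world matrix; neither convention is confirmed by the paper.

```python
import numpy as np

def erp_depth_to_points(depth, c2w=None):
    """Back-project a pixel-aligned ERP depth map to a 3D point cloud.

    Assumes depth stores distance along each viewing ray (not planar
    depth) and follows the same ERP layout as erp_pixel_to_ray above.
    c2w is an optional 4x4 camera-to-world matrix.
    """
    h, w = depth.shape
    v, u = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    rays = erp_pixel_to_ray(u, v, w, h)          # (h, w, 3) unit rays
    pts = rays * depth[..., None]                # scale by ray length
    if c2w is not None:
        pts = pts @ c2w[:3, :3].T + c2w[:3, 3]   # rotate + translate
    return pts.reshape(-1, 3)
```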
Problem

Research questions and friction points this paper is trying to address.

Addressing geometric distortions in omnidirectional 3D reconstruction
Lack of specialized datasets for omnidirectional 3D benchmarking
Improving accuracy of 3D reconstruction from 360-degree images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Omnidirectional Blender 3D synthetic dataset
Diverse complex 3D scenes from Blender
Comprehensive ground truth for evaluation (see the metric sketch after this list)
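The paper's official evaluation metrics are not spelled out on this page; as one plausible example, the sketch below computes a solid-angle-weighted depth RMSE in the spirit of WS-PSNR-style weighting, which avoids over-counting the oversampled polar rows of an ERP grid.

```python
import numpy as np

def latitude_weighted_rmse(pred_depth, gt_depth, valid=None):
    """Solid-angle-weighted depth RMSE on an ERP grid.

    A plain per-pixel average over-counts the poles, where ERP
    oversamples the sphere; weighting each row by cos(latitude)
    restores a uniform spherical average. This is a common choice,
    not the benchmark's official definition.
    """
    h, w = gt_depth.shape
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    wgt = np.cos(lat)[:, None] * np.ones((1, w))
    if valid is not None:
        wgt = wgt * valid                        # mask invalid pixels
    err = (pred_depth - gt_depth) ** 2
    return float(np.sqrt((wgt * err).sum() / wgt.sum()))
```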
🔎 Similar Papers
2024-08-15 · IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Citations: 1
Shintaro Ito (Tohoku University)
Natsuki Takama (Tohoku University)
Toshiki Watanabe (Tohoku University)
Koichi Ito (Tohoku University)
Hwann-Tzong Chen (National Tsing Hua University)
T. Aoki (Tohoku University)