🤖 AI Summary
This work addresses the limitations of existing 360° camera-based 3D reconstruction methods, which typically rely on restrictive capture protocols and preprocessing steps, and which are highly susceptible to interference from the operator appearing in the scene. We propose a practical end-to-end pipeline that reconstructs scenes directly from raw 360° dual-fisheye images without any specialized acquisition guidelines or preprocessing, while effectively handling the persistent presence of the operator in the field of view. To our knowledge, this is the first method enabling preprocessing-free, casual 360° capture for 3D reconstruction. We also introduce the first multi-level benchmark dataset tailored to this task and enhance the 3D Gaussian Splatting (3DGS) framework to better suit such unconstrained inputs. Experiments demonstrate that our approach significantly outperforms both the original 3DGS and robust perspective-based reconstruction baselines on our dataset, confirming the advantages of casual 360° capture for real-world scene reconstruction.
📝 Abstract
Radiance fields have emerged as powerful tools for 3D scene reconstruction. However, casual capture remains challenging due to the narrow field of view of perspective cameras, which limits the viewpoint coverage and feature correspondences necessary for reliable camera calibration and reconstruction. While commercially available 360$^\circ$ cameras offer significantly broader coverage than perspective cameras for the same capture effort, existing 360$^\circ$ reconstruction methods require special capture protocols and pre-processing steps that undermine the promise of radiance fields: effortless workflows for capturing and reconstructing 3D scenes. We propose a practical pipeline for reconstructing 3D scenes directly from raw 360$^\circ$ camera captures. Our method requires no special capture protocols or pre-processing, and is robust to a prevalent source of reconstruction errors: the human operator, who is visible in all 360$^\circ$ imagery. To facilitate evaluation, we introduce a multi-tiered dataset of scenes captured as raw dual-fisheye images, establishing a benchmark for robust casual 360$^\circ$ reconstruction. Our method significantly outperforms not only vanilla 3DGS applied to 360$^\circ$ cameras but also robust perspective baselines when perspective cameras are simulated from the same capture, demonstrating the advantages of 360$^\circ$ capture for casual reconstruction. Additional results are available at: https://theialab.github.io/fullcircle