Pose-Free Omnidirectional Gaussian Splatting for 360-Degree Videos with Consistent Depth Priors

📅 2026-03-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work proposes PFGS360, the first method capable of reconstructing high-quality 3D Gaussian splatting scenes directly from 360° videos without relying on Structure-from-Motion (SfM) for camera poses. Existing omnidirectional 3D Gaussian splatting approaches depend on time-consuming SfM pipelines to obtain camera poses and sparse point clouds, making them ill-suited for pose-free 360° video inputs. PFGS360 addresses this limitation through three key components: a spherical consistency-aware pose estimation module, a depth inlier-driven Gaussian densification strategy, and monocular depth prior-guided 3D Gaussian optimization. Extensive experiments demonstrate that PFGS360 significantly outperforms both pose-based and pose-free 3D Gaussian splatting methods on real-world and synthetic 360° videos, achieving high-fidelity novel view synthesis.

📝 Abstract
Omnidirectional 3D Gaussian Splatting with panoramas is a key technique for 3D scene representation, yet existing methods typically rely on slow SfM to provide camera poses and sparse point priors. In this work, we propose a pose-free omnidirectional 3DGS method, named PFGS360, that reconstructs 3D Gaussians from unposed omnidirectional videos. To achieve accurate camera pose estimation, we first construct a spherical consistency-aware pose estimation module, which recovers poses by establishing consistent 2D-3D correspondences between the reconstructed Gaussians and the unposed images using the Gaussians' internal depth priors. In addition, to enhance the fidelity of novel view synthesis, we introduce a depth-inlier-aware densification module that extracts depth inliers and Gaussian outliers with consistent monocular depth priors, enabling efficient Gaussian densification and photorealistic novel view synthesis. Experiments show that PFGS360 significantly outperforms existing pose-free and pose-aware 3DGS methods on both real-world and synthetic 360-degree videos. Code is available at https://github.com/zcq15/PFGS360.
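To make the spherical 2D-3D correspondences mentioned in the abstract concrete, the sketch below backprojects equirectangular panorama pixels along unit-sphere rays scaled by per-pixel depth. This is a minimal illustration of the standard omnidirectional camera model under assumed conventions (y-up axis, longitude increasing left to right), not the authors' implementation; the function names are hypothetical.

```python
import numpy as np

def equirect_to_rays(h, w):
    """Map each pixel of an H x W equirectangular panorama to a unit ray
    on the sphere (standard omnidirectional camera model; conventions here
    are assumptions, not taken from the paper)."""
    # Pixel centers -> normalized coordinates in [0, 1).
    u = (np.arange(w) + 0.5) / w
    v = (np.arange(h) + 0.5) / h
    lon = (u - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi         # latitude in (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)  # both H x W
    # Spherical -> Cartesian unit vectors (y-up convention).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)  # H x W x 3

def backproject(rays, depth):
    """Lift a per-pixel (e.g. monocular) depth map along the spherical
    rays into 3D points, yielding 2D-3D correspondences of the kind a
    pose estimator can consume."""
    return rays * depth[..., None]
```

Pairing each pixel's 3D point with its 2D location is the correspondence structure on which a spherical pose solver can operate; the paper's consistency-aware module builds on Gaussians' internal depth priors rather than this bare model.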
Problem

Research questions and friction points this paper is trying to address.

pose-free
omnidirectional
3D Gaussian Splatting
360-degree video
depth priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

pose-free
omnidirectional Gaussian Splatting
depth priors
spherical consistency
novel view synthesis
Chuanqing Zhuang
School of Artificial Intelligence, University of Chinese Academy of Sciences
Xin Lu
School of Artificial Intelligence, University of Chinese Academy of Sciences
Zehui Deng
School of Artificial Intelligence, University of Chinese Academy of Sciences
Zhengda Lu
University of Chinese Academy of Sciences
Computer Graphics, Computer Vision
Yiqun Wang
Chongqing University ⇐ KAUST.edu.sa ⇐ ia.CAS.cn
Computer Graphics, Geometric Learning, Geometric Processing
Junqi Diao
Air Force Engineering University
Jun Xiao
School of Artificial Intelligence, University of Chinese Academy of Sciences