Enhancing Novel View Synthesis from Extremely Sparse Views with SfM-Free 3D Gaussian Splatting Framework

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address geometric distortions and severe rendering degradation in 3D Gaussian Splatting (3DGS) under extremely sparse-view settings (e.g., only two input views), where traditional Structure-from-Motion (SfM) fails, this paper proposes an SfM-free end-to-end joint optimization framework. The method replaces SfM-based geometry initialization with a learnable dense stereo matching module; introduces a coherent view interpolation network to generate view-consistent supervision signals; and designs multi-scale Laplacian consistency regularization alongside adaptive spatially aware geometric constraints to jointly optimize camera poses and 3D Gaussian parameters. Under extreme sparsity, the approach achieves a 2.75 dB PSNR improvement over the state of the art, significantly suppressing distortions, enhancing high-frequency details, and delivering superior visual quality compared to existing methods.

📝 Abstract
3D Gaussian Splatting (3DGS) has demonstrated remarkable real-time performance in novel view synthesis, yet its effectiveness relies heavily on dense multi-view inputs with precisely known camera poses, which are rarely available in real-world scenarios. When input views become extremely sparse, the Structure-from-Motion (SfM) method that 3DGS depends on for initialization fails to accurately reconstruct the 3D geometric structures of scenes, resulting in degraded rendering quality. In this paper, we propose a novel SfM-free 3DGS-based method that jointly estimates camera poses and reconstructs 3D scenes from extremely sparse-view inputs. Specifically, instead of SfM, we propose a dense stereo module that progressively estimates camera poses and reconstructs a global dense point cloud for initialization. To address the inherent problem of information scarcity in extremely sparse-view settings, we propose a coherent view interpolation module that interpolates camera poses between training view pairs and generates viewpoint-consistent content as additional supervision signals for training. Furthermore, we introduce multi-scale Laplacian consistency regularization and adaptive spatially-aware multi-scale geometry regularization to enhance the quality of geometric structures and rendered content. Experiments show that our method significantly outperforms other state-of-the-art 3DGS-based approaches, achieving a remarkable 2.75 dB improvement in PSNR under extremely sparse-view conditions (using only two training views). The images synthesized by our method exhibit minimal distortion while preserving rich high-frequency details, resulting in superior visual quality compared to existing techniques.
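The paper's exact loss formulation is not reproduced on this page, but the multi-scale Laplacian consistency regularization mentioned in the abstract can be illustrated with a minimal PyTorch sketch. This assumes a standard 3x3 Laplacian kernel, an average-pooled image pyramid, and an L1 comparison; the pyramid depth and scale weights are illustrative placeholders, not the authors' formulation.

import torch
import torch.nn.functional as F

# Illustrative sketch only: compare Laplacian (high-frequency) responses of a
# rendered image and its supervision target across an image pyramid.
_LAPLACIAN = torch.tensor([[0.0,  1.0, 0.0],
                           [1.0, -4.0, 1.0],
                           [0.0,  1.0, 0.0]]).view(1, 1, 3, 3)

def laplacian(img):
    # img: (B, C, H, W); apply the 3x3 Laplacian kernel per channel.
    c = img.shape[1]
    kernel = _LAPLACIAN.to(img.device, img.dtype).repeat(c, 1, 1, 1)
    return F.conv2d(img, kernel, padding=1, groups=c)

def multiscale_laplacian_loss(rendered, target, num_scales=3, weights=(1.0, 0.5, 0.25)):
    # Accumulate L1 differences of Laplacian responses over progressively downsampled scales.
    loss = 0.0
    for s in range(num_scales):
        loss = loss + weights[s] * F.l1_loss(laplacian(rendered), laplacian(target))
        if s < num_scales - 1:
            rendered = F.avg_pool2d(rendered, kernel_size=2)
            target = F.avg_pool2d(target, kernel_size=2)
    return loss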
Problem

Research questions and friction points this paper is trying to address.

Overcoming 3DGS dependence on dense multi-view inputs with precise camera poses
Solving Structure-from-Motion failure in extremely sparse-view reconstruction scenarios
Addressing information scarcity in novel view synthesis from only 2 views
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dense stereo module for pose estimation and point cloud initialization
Coherent view interpolation for additional supervision signals
Multi-scale regularization for enhanced geometry and rendering quality (a training-loop sketch follows this list)
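As referenced above, the sketch below shows how these components could fit together in a joint optimization loop. It is a hedged outline, not the paper's actual implementation: the callables (dense_stereo_init, view_interpolator, gaussians_from_points, render, and the loss functions) and the loss weights are hypothetical placeholders for the modules the paper describes.

import torch

def train_sparse_view_3dgs(views, dense_stereo_init, view_interpolator,
                           gaussians_from_points, render, photometric_loss,
                           laplacian_loss, geometry_loss,
                           iters=10000, lambda_lap=0.1, lambda_geo=0.05):
    # 1) SfM-free initialization: dense stereo yields camera poses and a dense point cloud.
    poses, point_cloud = dense_stereo_init(views)
    gaussians = gaussians_from_points(point_cloud)

    # 2) Interpolated views between training view pairs serve as additional supervision.
    pseudo_views = view_interpolator(views, poses)

    # 3) Jointly optimize 3D Gaussian parameters and (learnable) camera poses.
    params = list(gaussians.parameters()) + [p for p in poses if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=1e-3)

    for _ in range(iters):
        optimizer.zero_grad()
        loss = 0.0
        for view in list(views) + list(pseudo_views):
            rendered = render(gaussians, view.pose)
            loss = loss + photometric_loss(rendered, view.image)
            loss = loss + lambda_lap * laplacian_loss(rendered, view.image)
        loss = loss + lambda_geo * geometry_loss(gaussians)
        loss.backward()
        optimizer.step()

    return gaussians, poses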
Zongqi He
Student, The Hong Kong Polytechnic University
Computer vision, 3D reconstruction, Low-level vision

Hanmin Li
School of Intelligent Systems Engineering, Sun Yat-sen University

Kin-Chung Chan
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University

Yushen Zuo
The Hong Kong Polytechnic University
Computer vision, Deep learning, Image generation

Hao Xie
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University

Zhe Xiao
The Hong Kong Polytechnic University
Computer vision, Image processing, Neural rendering

Jun Xiao
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University

Kin-Man Lam
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University