EVPGS: Enhanced View Prior Guidance for Splatting-based Extrapolated View Synthesis

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor generalization of Gaussian Splatting (GS) models to extrapolated view synthesis (EVS) under sparse-view settings, this paper proposes a coarse-to-fine, two-stage, view-prior-guided framework. At the coarse stage, regularization at both the appearance and geometry levels reduces rendering artifacts caused by insufficient view coverage; at the fine stage, occlusion-aware view priors are generated and refined with the aid of the coarse-stage output to provide further training guidance. Together, view augmentation and iterative prior refinement improve structural fidelity and texture consistency at extrapolated views. Evaluated on the newly introduced Merchandise3D dataset and multiple established benchmarks, the method achieves state-of-the-art performance, with substantial gains in PSNR, SSIM, and LPIPS. The authors state that code, dataset, and models will be made public.

📝 Abstract
Gaussian Splatting (GS)-based methods rely on sufficient training view coverage and perform synthesis on interpolated views. In this work, we tackle the more challenging and underexplored Extrapolated View Synthesis (EVS) task: enabling GS-based models trained with limited view coverage to generalize well to extrapolated views. To achieve this goal, we propose a view augmentation framework that guides training through a coarse-to-fine process. At the coarse stage, we reduce rendering artifacts caused by insufficient view coverage by introducing a regularization strategy at both the appearance and geometry levels. At the fine stage, we generate reliable view priors to provide further training guidance. To this end, we incorporate occlusion awareness into the view prior generation process and refine the view priors with the aid of the coarse-stage output. We call our framework Enhanced View Prior Guidance for Splatting (EVPGS). To comprehensively evaluate EVPGS on the EVS task, we collect a real-world dataset, Merchandise3D, dedicated to the EVS scenario. Experiments on three datasets, including both real and synthetic data, demonstrate that EVPGS achieves state-of-the-art performance, improving synthesis quality at extrapolated views for GS-based methods both qualitatively and quantitatively. We will make our code, dataset, and models public.
Problem

Research questions and friction points this paper is trying to address.

Enhance Gaussian Splatting for extrapolated view synthesis
Address limited view coverage in training for better generalization
Improve synthesis quality at extrapolated views with view augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

View augmentation framework for training guidance
Regularization at appearance and geometry levels
Occlusion-aware view prior generation
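The coarse-to-fine guidance summarized above can be sketched in minimal form. All names and loss terms below are hypothetical illustrations of the two stages described in the abstract, not the authors' implementation:

```python
# Hypothetical sketch of coarse-to-fine view-prior guidance;
# illustrative only, not the EVPGS code.

def coarse_objective(render_err, appearance_reg, geometry_reg):
    # Coarse stage: rendering loss plus appearance- and geometry-level
    # regularization to suppress artifacts from sparse view coverage.
    return render_err + appearance_reg + geometry_reg

def refine_prior(prior, occlusion_mask, coarse_render):
    # Fine stage: where the generated view prior is marked occluded,
    # fall back to the coarse-stage rendering; otherwise keep the prior.
    return [c if occluded else p
            for p, c, occluded in zip(prior, coarse_render, occlusion_mask)]
```

The refined priors would then supervise training at augmented (extrapolated) views.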
Authors
Jiahe Li, MT Lab, Meitu Inc., Beijing 100083, China
Feiyu Wang, Fudan University (computer vision)
Xiaochao Qu, MT Lab, Meitu Inc., Beijing 100083, China
Chengjing Wu, MT Lab, Meitu Inc., Beijing 100083, China
Luoqi Liu, Director of MT Lab, Meitu (computer vision)
Ting Liu, MT Lab, Meitu Inc., Beijing 100083, China