AI Summary
Existing 3D Gaussian Splatting (3DGS) methods suffer from severe artifacts and unfilled holes under large viewpoint deviations, hindering unconstrained scene exploration. To address this, we propose Wild-Explore, a novel framework with three key components: (1) an information-gain-driven virtual camera sampling strategy that actively expands the coverage of training views; (2) the first integration of video diffusion model priors into 3DGS, generating structurally consistent prior images to regularize the reconstruction; and (3) joint optimization of the Gaussian parameters through differentiable rendering. Evaluated on our newly established Wild-Explore benchmark, the method significantly improves reconstruction fidelity under large viewpoint shifts. It enables artifact-free, hole-free, real-time novel-view synthesis from arbitrary angles, supporting seamless, high-fidelity 3D scene exploration.
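For intuition, component (1) can be read as a greedy coverage loop: sample candidate poses, score each by how many under-observed Gaussians it would see, and keep the best. The sketch below is a minimal illustration under simplifying assumptions (a cone-shaped visibility test and a per-Gaussian observation count as the novelty signal); the helper names `look_at`, `gain`, and `select_virtual_cameras` are ours, not the paper's.

```python
# Minimal sketch of information-gain-driven camera placement.
# Illustrative only -- not the authors' implementation.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """World-to-camera rotation whose forward axis points at `target`.
    Assumes `eye` is not directly above/below `target` along `up`."""
    fwd = target - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    down = np.cross(fwd, right)
    return np.stack([right, down, fwd])  # rows: camera x, y, z axes

def gain(eye, R, points, counts, fov_cos=0.8):
    """Information gain of a pose: summed novelty of Gaussian centers
    inside a crude viewing cone (novelty = 1 / (1 + observation count))."""
    dirs = points - eye
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    visible = dirs @ R[2] > fov_cos
    return np.sum(1.0 / (1.0 + counts[visible]))

def select_virtual_cameras(points, obs_counts, n_candidates=256, k=8,
                           radius=4.0, seed=0, fov_cos=0.8):
    """Greedily pick k candidate poses that maximize information gain."""
    rng = np.random.default_rng(seed)
    center = points.mean(axis=0)
    eyes = center + radius * rng.normal(size=(n_candidates, 3))
    counts = obs_counts.astype(float).copy()
    chosen = []
    for _ in range(k):
        best = max(eyes, key=lambda e: gain(e, look_at(e, center),
                                            points, counts, fov_cos))
        R = look_at(best, center)
        dirs = points - best
        dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
        counts[dirs @ R[2] > fov_cos] += 1.0  # the new view "observes" these
        chosen.append((best, R))
    return chosen  # list of (position, rotation) virtual cameras
```

Updating the observation counts after each pick is what makes the selection greedy: a pose that duplicates an already-chosen view scores low on the next round, so the selected cameras spread over the scene.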
Abstract
Recent advances in novel view synthesis (NVS) have enabled real-time rendering with 3D Gaussian Splatting (3DGS). However, existing methods struggle with artifacts and missing regions when rendering from viewpoints that deviate from the training trajectory, limiting seamless scene exploration. To address this, we propose a 3DGS-based pipeline that generates additional training views to enhance reconstruction. We introduce an information-gain-driven virtual camera placement strategy to maximize scene coverage, then apply video diffusion priors to refine the rendered results. Fine-tuning the 3D Gaussians on these enhanced views significantly improves reconstruction quality. To evaluate our method, we present Wild-Explore, a benchmark designed for challenging scene exploration. Experiments demonstrate that our approach outperforms existing 3DGS-based methods, enabling high-quality, artifact-free rendering from arbitrary viewpoints.
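As a rough sketch of the final step, the fine-tuning stage can be pictured as an ordinary 3DGS optimization loop that mixes the original training views with the diffusion-refined virtual views. Here `render` stands in for any differentiable Gaussian rasterizer and `refined_views` for the diffusion-refined (camera, image) pairs; both are placeholders, not the paper's API.

```python
# Minimal sketch of fine-tuning on the enhanced view set.
# `gaussians` is assumed to be a dict of trainable tensors; `render` is a
# stand-in for a differentiable 3DGS rasterizer -- both are placeholders.
import torch

def finetune(gaussians, train_views, refined_views, render,
             steps=2000, lr=1e-3):
    """Optimize Gaussian parameters on original + diffusion-refined views."""
    params = [p for p in gaussians.values() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=lr)
    views = train_views + refined_views       # enhanced training set
    for step in range(steps):
        cam, target = views[step % len(views)]
        pred = render(gaussians, cam)         # differentiable rendering
        loss = torch.nn.functional.l1_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gaussians
```

In practice a 3DGS pipeline would typically use the standard photometric loss (L1 plus D-SSIM) and a densification schedule; the plain L1 loss above is only for brevity.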
https://exploregs.github.io