Extreme Views: 3DGS Filter for Novel View Synthesis from Out-of-Distribution Camera Poses

📅 2025-10-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
3D Gaussian Splatting (3DGS) suffers from severe visual artifacts—such as noise, color bleeding, and geometric distortions—when rendering from out-of-distribution camera poses, due to uncertainties in density, color, and geometry prediction. To address this without retraining, we propose a rendering-aware filtering method that dynamically identifies and suppresses instability in anisotropic extrapolation regions via gradient-based sensitivity scores computed from intermediate features. The filter is plug-and-play, seamlessly integrating into existing 3DGS pipelines. Our approach preserves real-time inference (≥30 FPS) and high fidelity while significantly improving visual quality, photorealism, and cross-view consistency in novel-view synthesis. Experiments demonstrate consistent superiority over NeRF-based baselines—including BayesRays—across multiple benchmarks, achieving a better trade-off between extrapolation robustness and computational efficiency.

📝 Abstract
When viewing a 3D Gaussian Splatting (3DGS) model from camera positions significantly outside the training data distribution, substantial visual noise commonly occurs. These artifacts result from the lack of training data in these extrapolated regions, leading to uncertain density, color, and geometry predictions from the model. To address this issue, we propose a novel real-time render-aware filtering method. Our approach leverages sensitivity scores derived from intermediate gradients, explicitly targeting instabilities caused by anisotropic orientations rather than isotropic variance. This filtering method directly addresses the core issue of generative uncertainty, allowing 3D reconstruction systems to maintain high visual fidelity even when users freely navigate outside the original training viewpoints. Experimental evaluation demonstrates that our method substantially improves visual quality, realism, and consistency compared to existing Neural Radiance Field (NeRF)-based approaches such as BayesRays. Critically, our filter integrates seamlessly into existing 3DGS rendering pipelines in real time, unlike methods that require extensive post-hoc retraining or fine-tuning. Code and results are available at https://damian-bowness.github.io/EV3DGS
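The core idea of the abstract can be illustrated with a toy sketch: score each Gaussian by how sharply its rendered contribution changes with the camera pose (a finite-difference stand-in for the paper's gradients of intermediate features), then down-weight the opacity of unstable, anisotropic Gaussians. All function names, the angle-based response model, and the exponential down-weighting below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def gaussian_response(theta, aniso):
    """Toy per-Gaussian response as a function of viewing angle theta.
    Highly anisotropic Gaussians (large `aniso`) vary sharply with
    viewpoint; an isotropic Gaussian (aniso=0) is view-independent."""
    return np.exp(-aniso * np.sin(theta) ** 2)

def sensitivity_score(theta, aniso, eps=1e-4):
    """Finite-difference proxy for the gradient of the response with
    respect to the camera angle at the query viewpoint."""
    g = (gaussian_response(theta + eps, aniso) -
         gaussian_response(theta - eps, aniso)) / (2 * eps)
    return abs(g)

def filter_opacities(opacities, thetas, anisos, lam=1.0):
    """Suppress Gaussians whose response is unstable at the query
    viewpoint; stable (isotropic) Gaussians pass through unchanged."""
    scores = np.array([sensitivity_score(t, a)
                       for t, a in zip(thetas, anisos)])
    return opacities * np.exp(-lam * scores)
```

In this sketch, an isotropic Gaussian keeps its opacity unchanged at any viewing angle, while a strongly anisotropic one viewed off-axis is attenuated, mirroring the paper's claim of targeting anisotropic instability rather than isotropic variance. Because the filter only rescales opacities at render time, it needs no retraining and adds negligible cost per frame.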
Problem

Research questions and friction points this paper is trying to address.

Reducing visual noise from out-of-distribution camera poses
Addressing generative uncertainty in 3D Gaussian Splatting models
Improving novel view synthesis without retraining or fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-time render-aware filtering for 3DGS
Uses sensitivity scores from intermediate gradients
Targets anisotropic instabilities for uncertainty reduction