🤖 AI Summary
Traditional alpha blending in 3D Gaussian Splatting (3DGS) suffers from scaling artifacts—blurring under magnification and staircasing under minification—when rendering novel views at sampling rates unseen during training, because alpha is modeled as a fixed per-pixel scalar. To address this, the paper proposes Gaussian Blending, which treats alpha and transmittance as spatially varying distributions over the pixel area rather than scalars, enabling nearby background splats to contribute to the final rendering without extra parameters or computational overhead. By updating transmittance according to the spatial distribution of alpha across the pixel and optimizing via differentiable rendering, Gaussian Blending mitigates both erosion-induced blurring and dilation-induced staircase artifacts while preserving fine details at unseen sampling rates and viewpoints. Experiments demonstrate state-of-the-art performance across multiple novel view synthesis benchmarks. Gaussian Blending is a drop-in replacement for alpha blending in existing 3DGS-based frameworks and maintains real-time rendering.
📝 Abstract
The recent introduction of 3D Gaussian Splatting (3DGS) has significantly advanced novel view synthesis. Several studies have further improved the rendering quality of 3DGS, yet they still exhibit noticeable visual discrepancies when synthesizing views at sampling rates unseen during training. Specifically, they suffer from (i) erosion-induced blurring artifacts when zooming in and (ii) dilation-induced staircase artifacts when zooming out. We speculate that these artifacts arise from the fundamental limitation of the alpha blending adopted in 3DGS methods. Instead of the conventional alpha blending that computes alpha and transmittance as scalar quantities over a pixel, we propose to replace it with our novel Gaussian Blending that treats alpha and transmittance as spatially varying distributions. Thus, transmittances can be updated considering the spatial distribution of alpha values across the pixel area, allowing nearby background splats to contribute to the final rendering. Our Gaussian Blending maintains real-time rendering speed and requires no additional memory cost, while being easily integrated as a drop-in replacement into existing 3DGS-based or other NVS frameworks. Extensive experiments demonstrate that Gaussian Blending effectively captures fine details at various sampling rates unseen during training, consistently outperforming existing novel view synthesis models across both unseen and seen sampling rates.
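The conventional alpha blending the abstract contrasts against can be sketched as follows. This is a minimal illustration of standard front-to-back compositing with scalar per-pixel alphas, not the paper's code; the `splats` list and its ordering are assumptions for the example.

```python
# Minimal sketch of conventional per-pixel alpha blending as used in 3DGS
# (illustrative only; not the paper's implementation). Each splat contributes
# a single scalar alpha at the pixel: color accumulates front to back,
# weighted by alpha times the remaining transmittance T, and T shrinks
# multiplicatively regardless of where within the pixel the splat's
# coverage actually lies -- the scalar limitation the paper addresses.

def alpha_blend(splats):
    """splats: list of (alpha, color) pairs sorted front (nearest) to back."""
    color = 0.0
    T = 1.0  # transmittance: fraction of light reaching splats behind
    for alpha, c in splats:
        color += T * alpha * c
        T *= 1.0 - alpha
    return color, T

# Two half-opaque splats: the front one absorbs half the light, so the
# background splat is weighted by only T = 0.5 at the whole pixel.
print(alpha_blend([(0.5, 1.0), (0.5, 0.0)]))  # → (0.5, 0.25)
```

Because `T` is a single scalar per pixel, a foreground splat that covers only part of the pixel still suppresses background splats uniformly; Gaussian Blending instead tracks how alpha is distributed across the pixel area so background splats can still contribute where the foreground coverage is low.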