IBGS: Image-Based Gaussian Splatting

📅 2025-11-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
3D Gaussian Splatting (3DGS) is limited by its low-order spherical harmonics (SH), which struggle to represent high-frequency color detail and view-dependent effects such as specular highlights, while existing texture-enhancement approaches suffer from either poor generalization (global textures) or excessive storage overhead (per-Gaussian textures). This paper proposes an image-space residual modeling framework: using the 3D Gaussian point cloud as a geometric prior, it learns pixel-wise color residuals from local neighborhoods in high-resolution source images and fuses them with the SH-based base color for novel-view synthesis. Crucially, this avoids explicit texture storage, preserving 3DGS's memory efficiency while significantly improving color fidelity, surface-geometry alignment, and the reconstruction of view-dependent details, including specular highlights. On standard novel-view synthesis (NVS) benchmarks, the method consistently outperforms state-of-the-art Gaussian-based approaches, especially on high-frequency structural detail and reflective appearance.

📝 Abstract
3D Gaussian Splatting (3DGS) has recently emerged as a fast, high-quality method for novel view synthesis (NVS). However, its use of low-degree spherical harmonics limits its ability to capture spatially varying color and view-dependent effects such as specular highlights. Existing works augment Gaussians with either a global texture map, which struggles with complex scenes, or per-Gaussian texture maps, which introduce high storage overhead. We propose Image-Based Gaussian Splatting, an efficient alternative that leverages high-resolution source images for fine details and view-specific color modeling. Specifically, we model each pixel color as a combination of a base color from standard 3DGS rendering and a learned residual inferred from neighboring training images. This promotes accurate surface alignment and enables rendering with high-frequency details and accurate view-dependent effects. Experiments on standard NVS benchmarks show that our method significantly outperforms prior Gaussian Splatting approaches in rendering quality, without increasing the storage footprint.
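The core composition described above (pixel color = SH-based base color from the 3DGS render plus a residual inferred from neighboring source views) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names are invented here, and the residual is supplied as a precomputed array rather than predicted by the learned network.

```python
import numpy as np

def fuse_pixel_color(base_color, residual):
    """Combine the SH-based base color from a standard 3DGS render
    with a per-pixel residual, clamping to the valid RGB range.
    Both inputs are H x W x 3 arrays of floats in [0, 1] / [-1, 1]."""
    return np.clip(base_color + residual, 0.0, 1.0)

# Toy 2x2 example: `base` stands in for the rendered base color and
# `res` for a residual that would be inferred from neighboring
# high-resolution training images (hypothetical fixed values here).
base = np.array([[[0.5, 0.4, 0.3], [0.9, 0.9, 0.9]],
                 [[0.1, 0.2, 0.3], [0.6, 0.5, 0.4]]])
res  = np.array([[[0.05, -0.02, 0.0], [0.2, 0.2, 0.2]],
                 [[-0.2, 0.0, 0.1], [0.0, 0.0, 0.0]]])
out = fuse_pixel_color(base, res)
```

Because the residual is looked up from source images at render time rather than stored per Gaussian, the representation itself stays as compact as vanilla 3DGS, which is the storage argument the abstract makes.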
Problem

Research questions and friction points this paper is trying to address.

Capturing spatially varying color and view-dependent effects in 3DGS
Reducing storage overhead relative to per-Gaussian texture mapping
Rendering high-frequency details and accurate view-dependent effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages high-resolution source images for fine details
Models pixel color as base color plus learned residual
Enables high-frequency details without storage increase
Hoang Chuong Nguyen
Australian National University
Wei Mao
NVIDIA
Jose M. Alvarez
NVIDIA
Miaomiao Liu
Australian National University
Computer Vision · Machine Learning