Neural Texture Splatting: Expressive 3D Gaussian Splatting for View Synthesis, Geometry, and Dynamic Reconstruction

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
3D Gaussian Splatting (3DGS) suffers from limited local modeling capacity due to its fixed Gaussian kernels, and struggles to simultaneously achieve geometric accuracy under sparse inputs, dynamic scene reconstruction, and high-fidelity novel-view synthesis. To address this, the paper proposes Neural Texture Splatting (NTS), a lightweight neural field built upon a tri-plane encoding and a shared neural decoder, enabling a globally shared yet locally adaptive representation. NTS explicitly models viewpoint- and time-dependent appearance and geometry by generating dynamic textures and displacements for each Gaussian primitive. This design enhances representational capacity while suppressing parameter redundancy, significantly improving generalization. Evaluated across diverse reconstruction scenarios, including static and dynamic scenes under both sparse and dense inputs, NTS consistently outperforms existing 3DGS variants on multiple benchmarks. It achieves state-of-the-art results in rendering quality, geometric fidelity, and temporal consistency, while maintaining efficient optimization and real-time rendering.

📝 Abstract
3D Gaussian Splatting (3DGS) has emerged as a leading approach for high-quality novel view synthesis, with numerous variants extending its applicability to a broad spectrum of 3D and 4D scene reconstruction tasks. Despite its success, the representational capacity of 3DGS remains limited by the use of 3D Gaussian kernels to model local variations. Recent works have proposed to augment 3DGS with additional per-primitive capacity, such as per-splat textures, to enhance its expressiveness. However, these per-splat texture approaches primarily target dense novel view synthesis with a reduced number of Gaussian primitives, and their effectiveness tends to diminish when applied to more general reconstruction scenarios. In this paper, we aim to achieve concrete performance improvement over state-of-the-art 3DGS variants across a wide range of reconstruction tasks, including novel view synthesis, geometry and dynamic reconstruction, under both sparse and dense input settings. To this end, we introduce Neural Texture Splatting (NTS). At the core of our approach is a global neural field (represented as a hybrid of a tri-plane and a neural decoder) that predicts local appearance and geometric fields for each primitive. By leveraging this shared global representation that models local texture fields across primitives, we significantly reduce model size and facilitate efficient global information exchange, demonstrating strong generalization across tasks. Furthermore, our neural modeling of local texture fields introduces expressive view- and time-dependent effects, a critical aspect that existing methods fail to account for. Extensive experiments show that Neural Texture Splatting consistently improves models and achieves state-of-the-art results across multiple benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Overcoming the limited representational capacity of fixed 3D Gaussian kernels for modeling local variations
Improving reconstruction performance across novel view synthesis, geometry, and dynamic reconstruction, under both sparse and dense inputs
Enabling expressive view- and time-dependent effects in 3D scene reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a global neural field, represented as a hybrid of a tri-plane and a shared neural decoder
Predicts local appearance and geometric fields for each Gaussian primitive
Models expressive view- and time-dependent texture effects
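The core mechanism the abstract describes, querying a shared tri-plane field and decoding features for each primitive, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the plane resolution, and the single linear-plus-tanh decoder are all hypothetical stand-ins for the learned tri-plane grids and the shared MLP decoder that NTS would use to produce per-Gaussian texture and displacement fields.

```python
import numpy as np

def bilinear_sample(plane, u, v):
    """Bilinearly sample an (R, R, C) feature plane at continuous coords u, v in [0, 1]."""
    R = plane.shape[0]
    x, y = u * (R - 1), v * (R - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] +
            wx * (1 - wy) * plane[y0, x1] +
            (1 - wx) * wy * plane[y1, x0] +
            wx * wy * plane[y1, x1])

def triplane_decode(planes, w, b, point):
    """Project a 3D point (in [0,1]^3) onto the XY/XZ/YZ planes, sum the
    sampled features, and decode with a shared layer (stand-in for the MLP)."""
    x, y, z = point
    feat = (bilinear_sample(planes["xy"], x, y) +
            bilinear_sample(planes["xz"], x, z) +
            bilinear_sample(planes["yz"], y, z))
    return np.tanh(w @ feat + b)  # e.g. a per-Gaussian texture/displacement code

# Hypothetical usage: random planes and decoder weights, one Gaussian center.
rng = np.random.default_rng(0)
planes = {k: rng.standard_normal((8, 8, 4)) for k in ("xy", "xz", "yz")}
w, b = rng.standard_normal((3, 4)), np.zeros(3)
code = triplane_decode(planes, w, b, (0.3, 0.7, 0.5))  # shape (3,)
```

Because every primitive queries the same planes and decoder, nearby Gaussians share features, which is the parameter-sharing and global information exchange the abstract credits for the reduced model size and improved generalization.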