🤖 AI Summary
While Gaussian splatting achieves strong performance in novel view synthesis, it requires millions of primitives to model highly textured scenes, incurring prohibitive storage and computational overhead even when the underlying geometry is simple. This work addresses real-time novel view synthesis by proposing *Nexels*: a hybrid representation that decouples geometry (surfels) from appearance, which is modeled jointly by a global neural field and per-surfel color parameters. The neural field textures only a fixed number of primitives per pixel, keeping the added compute low and enabling a compact representation without compromising visual fidelity. Experiments demonstrate significant gains: on outdoor scenes, primitive count drops by 9.7× and memory usage by 5.5×; on indoor scenes, primitive count drops by 31× and memory by 3.7×. Rendering is 2× faster than existing textured primitive methods while surpassing their visual quality.
📝 Abstract
Though Gaussian splatting has achieved impressive results in novel view synthesis, it requires millions of primitives to model highly textured scenes, even when the geometry of the scene is simple. We propose a representation that goes beyond point-based rendering and decouples geometry and appearance in order to achieve a compact representation. We use surfels for geometry and a combination of a global neural field and per-primitive colours for appearance. The neural field textures a fixed number of primitives for each pixel, ensuring that the added compute is low. Our representation matches the perceptual quality of 3D Gaussian splatting while using $9.7\times$ fewer primitives and $5.5\times$ less memory on outdoor scenes and using $31\times$ fewer primitives and $3.7\times$ less memory on indoor scenes. Our representation also renders twice as fast as existing textured primitives while improving upon their visual quality.
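To make the appearance model concrete, here is a minimal sketch of the idea the abstract describes: each pixel blends a fixed number K of surfels, and each surfel's colour is its per-primitive base colour plus an offset from a small global neural field evaluated at the surfel-local hit coordinate. This is an illustrative toy, not the paper's implementation; the encoding width, MLP size, and blending details are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3        # fixed number of primitives textured per pixel (per the abstract)
D_FEAT = 8   # sinusoidal-encoding width (assumption)

# Toy "global neural field": one hidden layer mapping an encoded 2D
# surfel-local coordinate to an RGB offset shared by all surfels.
W1 = rng.normal(0.0, 0.1, (D_FEAT, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 3))
b2 = np.zeros(3)

def encode(uv):
    """Sinusoidal positional encoding of a 2D surfel-local coordinate."""
    freqs = 2.0 ** np.arange(D_FEAT // 4)        # 2 frequencies per dim
    ang = uv[:, None] * freqs[None, :] * np.pi   # shape (2, D_FEAT // 4)
    return np.concatenate([np.sin(ang), np.cos(ang)]).ravel()  # (D_FEAT,)

def neural_texture(uv):
    """Evaluate the shared neural field at a surfel-local coordinate."""
    h = np.tanh(encode(uv) @ W1 + b1)
    return h @ W2 + b2                           # RGB offset

def shade_pixel(base_colours, uvs, alphas):
    """Front-to-back alpha-blend K textured surfels for one pixel.

    base_colours: (K, 3) per-primitive colours
    uvs:          (K, 2) surfel-local hit coordinates
    alphas:       (K,)   per-surfel opacities
    """
    out, transmittance = np.zeros(3), 1.0
    for c, uv, a in zip(base_colours, uvs, alphas):
        colour = np.clip(c + neural_texture(uv), 0.0, 1.0)
        out += transmittance * a * colour
        transmittance *= 1.0 - a
    return out
```

Because the field is queried only K times per pixel rather than along a full ray, the per-pixel cost stays bounded regardless of scene size, which is how the representation keeps rendering fast while remaining far more compact than millions of view-dependent Gaussians.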