🤖 AI Summary
Existing primitive-based neural reconstruction methods struggle to model high-frequency details, limiting rendering quality. This work proposes Neural Harmonic Textures, a novel representation that constructs virtual scaffolds around primitives to anchor latent features and interpolates them at ray-primitive intersection points. Inspired by Fourier analysis, the method applies periodic activations so that alpha blending becomes a weighted sum of harmonic components, which are then decoded in a single pass through a lightweight network. By integrating harmonic signal modeling with primitive-based representations, the approach significantly enhances detail reproduction while maintaining real-time performance. It is compatible with mainstream frameworks such as 3D Gaussian Splatting and Triangle Splatting, achieving state-of-the-art results across novel view synthesis, 2D image fitting, and semantic reconstruction, bridging the quality gap between primitive-based methods and neural fields.
📝 Abstract
Primitive-based methods such as 3D Gaussian Splatting have recently become the state of the art for novel view synthesis and related reconstruction tasks. Compared to neural fields, these representations are more flexible, more adaptive, and scale better to large scenes. However, the limited expressivity of individual primitives makes modeling high-frequency detail challenging. We introduce Neural Harmonic Textures, a neural representation that anchors latent feature vectors on a virtual scaffold surrounding each primitive. These features are interpolated within the primitive at ray intersection points. Inspired by Fourier analysis, we apply periodic activations to the interpolated features, turning alpha blending into a weighted sum of harmonic components. The resulting signal is then decoded in a single deferred pass using a small neural network, significantly reducing computational cost. Neural Harmonic Textures yield state-of-the-art results in real-time novel view synthesis while bridging the gap between primitive- and neural-field-based reconstruction. Our method integrates seamlessly into existing primitive-based pipelines such as 3DGUT, Triangle Splatting, and 2DGS. We further demonstrate its generality with applications to 2D image fitting and semantic reconstruction.
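The core pipeline described above (interpolate latent features at ray-primitive intersections, apply a periodic activation, alpha-blend the resulting harmonics, decode once per pixel) can be sketched roughly as below. This is a minimal NumPy illustration, not the paper's implementation: the feature dimension, per-channel frequencies, and the one-layer decoder are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_weights(alphas):
    """Front-to-back compositing weights: w_k = a_k * prod_{j<k} (1 - a_j)."""
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    return alphas * transmittance

def render_pixel(feats, alphas, freqs, W, b):
    """Blend periodically activated features, then decode in one deferred pass."""
    harmonics = np.sin(freqs[None, :] * feats)   # (K, D) harmonic components
    blended = alpha_weights(alphas) @ harmonics  # (D,) weighted sum via alpha blending
    return np.maximum(W @ blended + b, 0.0)      # tiny stand-in decoder (one ReLU layer)

# K = 3 ray-primitive intersections, D = 4-dim latent features (illustrative sizes)
feats  = rng.normal(size=(3, 4))            # features interpolated from the scaffold
alphas = np.array([0.5, 0.3, 0.8])          # per-intersection opacities, front to back
freqs  = np.array([1.0, 2.0, 4.0, 8.0])     # assumed per-channel frequencies
W, b   = rng.normal(size=(3, 4)), np.zeros(3)
rgb = render_pixel(feats, alphas, freqs, W, b)  # a single (3,) color per pixel
```

The key property the sketch shows is that the expensive network runs once per pixel on the blended signal, rather than once per intersection, which is what makes the deferred decode cheap.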