🤖 AI Summary
This work addresses the inefficiency of existing Gaussian splatting methods that rely on fixed square textures, which suffer from suboptimal memory usage and limited adaptability to variations in scene detail. The authors propose an adaptive anisotropic texture representation for Gaussians, introducing anisotropic textures coupled with a gradient-guided adaptive allocation mechanism into the Gaussian splatting framework for the first time. By integrating learnable texture mapping with anisotropic Gaussian modeling, the approach dynamically optimizes both the resolution and the aspect ratio of each primitive's texture. The resulting method enables detail-aware, non-uniform texture allocation, achieving rendering fidelity comparable to or better than fixed-texture approaches while using significantly less memory across multiple benchmark datasets.
📝 Abstract
Gaussian Splatting has emerged as a powerful representation for high-quality, real-time 3D scene rendering. While recent works extend Gaussians with learnable textures to enrich visual appearance, existing approaches allocate a fixed square texture per primitive, leading to inefficient memory usage and limited adaptability to scene variability. In this paper, we introduce adaptive anisotropic textured Gaussians (A$^2$TG), a novel representation that generalizes textured Gaussians by equipping each primitive with an anisotropic texture. Our method employs a gradient-guided adaptive rule to jointly determine texture resolution and aspect ratio, enabling non-uniform, detail-aware allocation that aligns with the anisotropic nature of Gaussian splats. This design significantly improves texture efficiency, reducing memory consumption while enhancing image quality. Experiments on multiple benchmark datasets demonstrate that A$^2$TG consistently outperforms fixed-texture Gaussian Splatting methods, achieving comparable rendering fidelity with substantially lower memory requirements.
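The abstract does not specify the exact allocation rule, but the idea of a gradient-guided, anisotropy-aware texture budget can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the function name `allocate_texture`, the linear budget scaling by accumulated gradient magnitude, and the square-root split of the texel budget by the splat's scale ratio are all assumptions chosen for clarity.

```python
import math

def allocate_texture(grad_mag, scale_x, scale_y,
                     base_texels=16, max_side=64):
    """Hypothetical gradient-guided allocation (illustrative only).

    grad_mag: accumulated image-space gradient magnitude for the
              Gaussian (a proxy for local detail); scale_x, scale_y:
              the splat's anisotropic scales along its two axes.
    Returns a (width, height) texture resolution whose total texel
    count grows with detail and whose aspect ratio matches the splat.
    """
    # Spend more texels where gradients indicate fine detail,
    # clamped to a fixed budget range.
    budget = min(max(base_texels * (1.0 + grad_mag), 4), max_side * max_side)
    # Align the texture's aspect ratio with the splat's anisotropy,
    # so width * height ≈ budget and width / height ≈ scale_x / scale_y.
    aspect = scale_x / scale_y
    w = max(1, min(max_side, round(math.sqrt(budget * aspect))))
    h = max(1, min(max_side, round(math.sqrt(budget / aspect))))
    return w, h
```

For example, an isotropic splat in a low-detail region would receive a small square texture, while a strongly elongated splat in a high-gradient region would receive a larger, elongated one, which is the non-uniform, detail-aware behavior the paper describes.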