🤖 AI Summary
This work addresses the coupling between update steps and gradient moments in 3D Gaussian Splatting (3DGS) optimization, which distorts optimizer state scaling, causes redundant updates outside the field of view, and destabilizes regularization. This study is the first to systematically uncover this coupling mechanism, and it proposes a decoupled optimization framework that recomposes the optimization process into three components: Sparse Adam, Re-State Regularization, and Decoupled Attribute Regularization. Building upon the principles of AdamW, the authors design AdamW-GS, an efficient optimizer tailored for 3DGS. Experiments demonstrate that the proposed method significantly improves reconstruction quality and optimization efficiency across multiple scenes, establishing a new paradigm for 3DGS optimization.
📝 Abstract
3D Gaussian Splatting (3DGS) has emerged as a powerful technique for real-time novel view synthesis. As an explicit representation optimized through gradient propagation among primitives, 3DGS directly adopts optimization practices widely accepted for deep neural networks (DNNs), such as synchronous weight updates and Adam with adaptive gradients. However, given the physical significance and specific design of 3DGS, two details are overlooked in its optimization: (i) update step coupling, which induces optimizer state rescaling and costly attribute updates outside the viewpoints, and (ii) gradient coupling in the moments, which may lead to under- or over-effective regularization. Nevertheless, this complex coupling remains under-explored. After revisiting the optimization of 3DGS, we take a step toward decoupling it and recompose the process into three components: Sparse Adam, Re-State Regularization, and Decoupled Attribute Regularization. Through extensive experiments under the 3DGS and 3DGS-MCMC frameworks, our work provides a deeper understanding of these components. Finally, based on this empirical analysis, we redesign the optimization and propose AdamW-GS by re-coupling the beneficial components, achieving better optimization efficiency and representation effectiveness simultaneously.
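To make the two decoupling ideas concrete, the following is a minimal NumPy sketch of one optimizer step that (a) updates moments and parameters only for primitives visible in the current view (sparse update) and (b) applies weight decay directly to the parameters rather than folding it into the gradient, in the AdamW style. The function name, signature, and hyperparameters are illustrative assumptions, not the paper's actual AdamW-GS implementation.

```python
import numpy as np

def adamw_sparse_step(params, grads, visible, m, v, t,
                      lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-2):
    """One AdamW-style step restricted to 'visible' primitives (hypothetical sketch).

    - Sparse update: moment buffers and parameters change only where
      visible is True, avoiding redundant updates outside the field of view.
    - Decoupled regularization: weight decay acts on the parameters directly
      and never enters the moment buffers (the AdamW principle).
    """
    b1, b2 = betas
    idx = np.flatnonzero(visible)
    # Update first/second moments only for visible primitives.
    m[idx] = b1 * m[idx] + (1 - b1) * grads[idx]
    v[idx] = b2 * v[idx] + (1 - b2) * grads[idx] ** 2
    # Bias-corrected moment estimates for the visible subset.
    m_hat = m[idx] / (1 - b1 ** t)
    v_hat = v[idx] / (1 - b2 ** t)
    # Adam step plus decoupled weight decay (bypasses the moments).
    params[idx] -= lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * params[idx])
    return params, m, v
```

In a sketch like this, primitives outside the view keep both their parameters and their optimizer state untouched, so the second-moment estimate is not rescaled by runs of zero gradients, and the regularization strength is independent of the adaptive step size.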