Second-order Optimization of Gaussian Splats with Importance Sampling

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the slow training of 3D Gaussian Splatting (3DGS) caused by reliance on first-order optimizers (e.g., Adam), this paper proposes LM-CG—the first sparse-aware second-order optimization framework tailored for 3DGS. Methodologically: (1) we design a matrix-free, GPU-parallel Levenberg–Marquardt (LM) solver that explicitly exploits the sparsity structure of the Jacobian; (2) we introduce dual importance sampling—over camera views and pixels—to drastically reduce gradient computation overhead; and (3) we propose a line-search-free, heuristic learning-rate strategy grounded in local curvature estimation. Experiments show that LM-CG accelerates training by 3× over standard LM and achieves up to 6× speedup over Adam in low-Gaussian-count regimes, while remaining competitive at moderate scales—significantly reducing time-to-convergence. This work pioneers the integration of sparsity-adapted second-order optimization into 3DGS training.
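The core of the summary above, a matrix-free LM solver, can be sketched as solving the damped normal equations $(J^\top J + \lambda I)\,d = -J^\top r$ with conjugate gradient, touching $J$ only through Jacobian-vector products so the sparse Jacobian is never materialized. The function and argument names (`lm_cg_step`, `jvp`, `jtvp`) are illustrative assumptions, not the paper's API:

```python
import numpy as np

def lm_cg_step(jvp, jtvp, residual, n_params, lam, iters=50, tol=1e-10):
    """Solve the LM normal equations (J^T J + lam*I) d = -J^T r with
    conjugate gradient, using only Jacobian-vector products
    (jvp: v -> J v, jtvp: u -> J^T u), so J is never built explicitly.
    Illustrative sketch; not the paper's implementation."""
    b = -jtvp(residual)          # right-hand side -J^T r
    x = np.zeros(n_params)
    r = b.copy()                 # residual of the linear system (x = 0 start)
    p = r.copy()
    rs = r @ r
    if rs < tol:
        return x
    for _ in range(iters):
        Ap = jtvp(jvp(p)) + lam * p   # (J^T J + lam*I) p, matrix-free
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Because CG needs only the products $v \mapsto Jv$ and $u \mapsto J^\top u$, per-Gaussian sparsity (each Gaussian touches few pixels) translates directly into cheap, parallelizable kernels.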

📝 Abstract
3D Gaussian Splatting (3DGS) is widely used for novel view synthesis due to its high rendering quality and fast inference time. However, 3DGS predominantly relies on first-order optimizers such as Adam, which leads to long training times. To address this limitation, we propose a novel second-order optimization strategy based on Levenberg-Marquardt (LM) and Conjugate Gradient (CG), which we specifically tailor towards Gaussian Splatting. Our key insight is that the Jacobian in 3DGS exhibits significant sparsity since each Gaussian affects only a limited number of pixels. We exploit this sparsity by proposing a matrix-free and GPU-parallelized LM optimization. To further improve its efficiency, we propose sampling strategies for both the camera views and loss function and, consequently, the normal equation, significantly reducing the computational complexity. In addition, we increase the convergence rate of the second-order approximation by introducing an effective heuristic to determine the learning rate that avoids the expensive computation cost of line search methods. As a result, our method achieves a $3\times$ speedup over standard LM and outperforms Adam by ${\sim}6\times$ when the Gaussian count is low while remaining competitive for moderate counts. Project Page: https://vcai.mpi-inf.mpg.de/projects/LM-IS
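The pixel-sampling idea in the abstract can be illustrated with a standard importance-sampling estimator: draw pixels with probability proportional to their residual magnitude and weight them by the inverse probability so the sampled sum remains an unbiased estimate of the full loss terms. The function name `sample_pixels` and the exact proposal distribution are assumptions for illustration; the paper's scheme may differ:

```python
import numpy as np

def sample_pixels(residuals, n_samples, rng):
    """Importance-sample pixel indices with probability proportional to
    |residual|, returning inverse-probability weights so that
    sum(weights * f[idx]) is an unbiased estimate of sum(f) for any
    per-pixel quantity f. Illustrative sketch, not the paper's code."""
    p = np.abs(residuals).astype(float)
    p /= p.sum()                          # proposal distribution over pixels
    idx = rng.choice(len(residuals), size=n_samples, p=p)
    weights = 1.0 / (n_samples * p[idx])  # importance weights
    return idx, weights
```

Concentrating samples on high-residual pixels reduces estimator variance where the optimizer still has work to do, while the weights keep the Gauss-Newton quantities (e.g. $J^\top r$) unbiased.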
Problem

Research questions and friction points this paper is trying to address.

Speeds up 3D Gaussian Splatting training with second-order optimization
Reduces computational cost via sparse Jacobian and GPU parallelism
Improves convergence with heuristic learning rates and sampling strategies
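One standard curvature-based heuristic of the kind the bullets describe is the Cauchy-point step size $\alpha = (g^\top g)/(g^\top H g)$, which minimizes the local quadratic model along the gradient direction without any line search. This is a hypothetical sketch of that general idea, not the paper's exact heuristic; `hvp` is an assumed curvature-matrix-vector product (e.g. the Gauss-Newton product $J^\top(Jv)$):

```python
import numpy as np

def heuristic_step_size(grad, hvp):
    """Line-search-free step size from local curvature:
    alpha = (g^T g) / (g^T H g), the exact minimizer of the local
    quadratic model along -g. Hypothetical sketch; the paper's
    heuristic may be defined differently."""
    gTg = float(grad @ grad)
    curv = float(grad @ hvp(grad))     # g^T H g via one Hessian-vector product
    if curv <= 0.0:
        return 1.0                     # fall back when curvature is non-positive
    return gTg / curv
```

A single Hessian-vector product replaces the repeated loss evaluations a backtracking line search would need, which is where the claimed convergence-speed benefit comes from.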
Innovation

Methods, ideas, or system contributions that make the work stand out.

Second-order optimization with Levenberg-Marquardt and Conjugate Gradient
Matrix-free GPU-parallelized LM exploiting Jacobian sparsity
Efficient sampling strategies for camera views and loss function