🤖 AI Summary
Neural fields converge slowly under Adam-style stochastic optimization, hindering practical deployment. To address this, we propose the first curvature-aware diagonal preconditioning framework tailored to stochastic training of neural fields, overcoming the fundamental incompatibility of second-order methods (e.g., L-BFGS) with stochastic neural field optimization. Our method constructs a diagonal Hessian approximation from stochastic gradients, integrates adaptive learning-rate preconditioning with the neural field parameterization, and supports end-to-end joint optimization for NeRF. Evaluated on image reconstruction, shape modeling, and NeRF, it achieves an average 2.1× training speedup while matching or improving reconstruction accuracy and stabilizing convergence. The core contribution is a theoretical foundation for second-order, curvature-aware diagonal preconditioning in the stochastic setting, realized as an efficient, scalable, plug-and-play optimization accelerator for neural fields.
📝 Abstract
Neural fields encode continuous multidimensional signals as neural networks, enabling diverse applications in computer vision, robotics, and geometry. While Adam is effective for stochastic optimization, it often requires long training times. To address this, we explore alternative optimization techniques that accelerate training without sacrificing accuracy. Traditional second-order methods such as L-BFGS are ill-suited to stochastic settings, since minibatch noise corrupts their curvature estimates. We propose a theoretical framework for training neural fields with curvature-aware diagonal preconditioners, demonstrating their effectiveness across tasks such as image reconstruction, shape modeling, and Neural Radiance Fields (NeRF).
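The summary above does not include code, but the core idea, scaling each gradient coordinate by an estimate of the corresponding diagonal Hessian entry, can be sketched on a toy ill-conditioned quadratic. This is a minimal illustration using Hutchinson's diagonal estimator, not the authors' implementation; the sample count and update rule are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ill-conditioned quadratic: f(w) = 0.5 * w^T A w, condition number 100.
A = np.diag([100.0, 1.0])

def grad(w):
    return A @ w

def hvp(w, v):
    # Hessian-vector product; for a quadratic the Hessian is constant (= A).
    return A @ v

def hutchinson_diag(w, n_samples=50):
    # Estimate diag(H) via E[z * (H z)] with Rademacher-distributed z.
    d = np.zeros_like(w)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=w.shape)
        d += z * hvp(w, z)
    return d / n_samples

w = np.array([1.0, 1.0])
lr, eps = 1.0, 1e-8
for _ in range(20):
    # Curvature-aware diagonal preconditioner: rescale each coordinate
    # by its estimated curvature so all directions progress at a similar rate.
    D = np.abs(hutchinson_diag(w)) + eps
    w -= lr * grad(w) / D

print(np.linalg.norm(w))
```

On this quadratic the preconditioned step recovers a Newton-like update per coordinate, so the iterate collapses to the optimum almost immediately, whereas plain gradient descent at the same learning rate would diverge along the high-curvature axis. In a real neural field, the Hessian-vector product would come from automatic differentiation rather than a closed-form matrix.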