Symmetric Rank-One Quasi-Newton Methods for Deep Learning Using Cubic Regularization

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
First-order methods (e.g., Adam) that dominate deep neural network optimization neglect curvature information and are prone to stagnating at saddle points or poor local minima, while classical L-BFGS is constrained by its positive-definiteness assumption on Hessian approximations. To address these limitations, this work proposes a limited-memory symmetric rank-one (SR1) quasi-Newton method tailored to nonconvex optimization. Unlike L-BFGS, the SR1 update permits indefinite Hessian approximations and can therefore explicitly exploit directions of negative curvature. The update is paired with an adaptive cubic regularization framework whose subproblems admit closed-form solutions under suitable regularization choices. This constitutes the first incorporation of indefinite SR1 updates together with a tractable cubic model into deep learning training. Experiments on autoencoders and feedforward networks demonstrate that the proposed method significantly outperforms Adam, AdaGrad, and L-BFGS in both convergence speed and generalization performance.
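For orientation, below is a minimal dense-matrix sketch of the SR1 secant update the summary refers to. The paper uses a limited-memory (compact) representation rather than an explicit matrix; the function name, tolerance, and dense form here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """One symmetric rank-one (SR1) secant update of a dense Hessian
    approximation B. Unlike BFGS, the result may be indefinite, which
    is what allows negative curvature to be represented.

    B : (n, n) current Hessian approximation
    s : (n,)   step difference      x_{k+1} - x_k
    y : (n,)   gradient difference  g_{k+1} - g_k
    """
    r = y - B @ s          # residual of the secant condition B s = y
    denom = r @ s
    # Standard SR1 safeguard: skip the update when the denominator is
    # tiny relative to ||r|| * ||s||, which would make it unbounded.
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B
    return B + np.outer(r, r) / denom  # symmetric rank-one correction
```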

📝 Abstract
Stochastic gradient descent and other first-order variants, such as Adam and AdaGrad, are commonly used in the field of deep learning due to their computational efficiency and low-storage memory requirements. However, these methods do not exploit curvature information. Consequently, iterates can converge to saddle points or poor local minima. On the other hand, quasi-Newton methods compute Hessian approximations which exploit this information with a comparable computational budget. Quasi-Newton methods re-use previously computed iterates and gradients to compute a low-rank structured update. The most widely used quasi-Newton update is the L-BFGS, which guarantees a positive semi-definite Hessian approximation, making it suitable in a line search setting. However, the loss functions in DNNs are non-convex, where the Hessian is potentially non-positive definite. In this paper, we propose using a limited-memory symmetric rank-one quasi-Newton approach which allows for indefinite Hessian approximations, enabling directions of negative curvature to be exploited. Furthermore, we use a modified adaptive regularized cubics approach, which generates a sequence of cubic subproblems that have closed-form solutions with suitable regularization choices. We investigate the performance of our proposed method on autoencoders and feed-forward neural network models and compare our approach to state-of-the-art first-order adaptive stochastic methods as well as other quasi-Newton methods.
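For reference, the adaptive regularized cubics framework the abstract mentions minimizes a cubic model of the objective at each iteration. The display below uses the standard ARC notation of Cartis, Gould, and Toint; the symbols g_k, B_k, and σ_k follow that common usage and are not necessarily the paper's exact notation.

```latex
% Cubic-regularized subproblem solved at iteration k:
%   g_k      = \nabla f(x_k),
%   B_k      = (possibly indefinite) limited-memory SR1 approximation,
%   \sigma_k = adaptive regularization parameter, \sigma_k > 0.
\[
  p_k \in \arg\min_{p \in \mathbb{R}^n}
      \; g_k^{\top} p
      + \tfrac{1}{2}\, p^{\top} B_k\, p
      + \tfrac{\sigma_k}{3}\, \lVert p \rVert^{3}
\]
```

In standard ARC schemes, σ_k is increased when the model poorly predicts the actual decrease in f and decreased when the prediction is good, playing a role analogous to a trust-region radius.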
Problem

Research questions and friction points this paper is trying to address.

Improves convergence by avoiding saddle points
Exploits curvature information in deep learning
Enables effective indefinite Hessian approximations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symmetric rank-one quasi-Newton
Indefinite Hessian approximations
Adaptive regularized cubics approach
Aditya Ranganath
Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550
Mukesh Singhal
University of California, Merced
Cloud Computing · Distributed Computing · Cyber Security · Networking
Roummel F. Marcia
Applied Mathematics, University of California, Merced, 5200 N Lake Road, Merced, CA 95343