Regularized Adaptive Momentum Dual Averaging with an Efficient Inexact Subproblem Solver for Training Structured Neural Network

๐Ÿ“… 2024-03-21
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
In structured neural network training, coupling nonsmooth regularization with adaptive diagonal preconditioning leads to update directions lacking closed-form solutions and renders subproblems computationally expensive to solve. Method: We propose RAMDA, an optimization algorithm integrating the dual averaging framework, adaptive diagonal preconditioning, and a regularized momentum mechanism. Leveraging manifold identification theory, we design a verifiable inexact subproblem solving criterion and an efficient solver. Contribution/Results: RAMDA is the first method to rigorously guarantee asymptotic identification of the optimal structure induced by the regularizer under inexact subproblem solutions, simultaneously ensuring local structural optimality and strong generalization. Empirically, it significantly outperforms state-of-the-art methods on large-scale vision, language, and speech tasksโ€”improving both training efficiency and learned model sparsity/structure quality. The implementation is publicly available.
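The summary's central difficulty, that coupling a nonsmooth regularizer with an adaptive diagonal preconditioner generally destroys closed-form update directions, is easiest to see by contrast with the one case where a closed form survives: a separable L1 regularizer. Below is a minimal sketch of a single regularized dual-averaging step in that separable case (the function name `rda_l1_step` and its parameterization are illustrative, not taken from the paper; RAMDA targets structured regularizers where this separability fails):

```python
import numpy as np

def rda_l1_step(w0, grad_sum, D, t, lam, eta):
    """Closed-form solution of the separable L1 dual-averaging subproblem
        min_w <grad_sum, w> + (1/(2*eta)) * (w - w0)^T diag(D) (w - w0)
                            + t * lam * ||w||_1
    where grad_sum is the running sum of past (momentum-weighted) gradients
    and D is the adaptive diagonal preconditioner. Because both the quadratic
    and the L1 term separate over coordinates, soft-thresholding solves it
    exactly; group-structured norms couple coordinates and do not admit this.
    """
    z = w0 - eta * grad_sum / D          # unregularized coordinate-wise minimizer
    thresh = eta * t * lam / D           # per-coordinate shrinkage amount
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
```

Each nonzero output coordinate satisfies the stationarity condition of the subproblem exactly, which is precisely the property that is lost (and must be approximated by an inexact inner solver) once the regularizer couples coordinates.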

๐Ÿ“ Abstract
We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. Similar to existing regularized adaptive methods, the subproblem for computing the update direction of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus also carefully devise an implementable inexactness condition that retains convergence guarantees similar to the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method outputting models that possess outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments on large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms the state of the art for training structured neural networks. An implementation of our algorithm is available at https://www.github.com/ismoptgroup/RAMDA/.
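When the regularizer couples coordinates, for example a group-lasso norm, while the diagonal preconditioner varies within a group, the subproblem no longer separates, which is why an iterative inner solver with a verifiable stopping rule is needed. The sketch below solves such a subproblem by proximal gradient with a simple relative-change stopping test (all names are illustrative, and this stopping test is a stand-in; the paper derives its own implementable inexactness condition with convergence guarantees):

```python
import numpy as np

def group_prox(v, thresh, groups):
    # Proximal operator of thresh * sum_g ||v_g||_2: block soft-thresholding.
    out = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * v[g]
    return out

def solve_subproblem(grad, z, diag_precond, lam, groups,
                     max_iter=100, tol=1e-8):
    """Approximately minimize
        Q(w) = <grad, w> + 0.5 * (w - z)^T diag(diag_precond) (w - z)
               + lam * sum_g ||w_g||_2
    by proximal gradient. With a preconditioner that is non-uniform within
    a group, Q has no closed-form minimizer, so we iterate."""
    L = diag_precond.max()               # Lipschitz constant of the smooth part
    step = 1.0 / L
    w = z.copy()
    for _ in range(max_iter):
        smooth_grad = grad + diag_precond * (w - z)
        w_next = group_prox(w - step * smooth_grad, step * lam, groups)
        # Illustrative inexactness test: stop on small relative change.
        if np.linalg.norm(w_next - w) <= tol * max(1.0, np.linalg.norm(w)):
            return w_next
        w = w_next
    return w
```

For instance, with `lam` large enough the inner solver drives whole groups exactly to zero, which is the structured sparsity the outer algorithm is meant to identify.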
Problem

Research questions and friction points this paper is trying to address.

Training structured neural networks with nonsmooth regularizers
Devising an inexact subproblem solver that retains convergence guarantees
Achieving a locally optimal model structure for better predictive performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm
Efficient inexact subproblem solver with convergence guarantees
Manifold identification theory guaranteeing locally optimal structure
๐Ÿ”Ž Similar Papers
No similar papers found.
Z
Zih-Syuan Huang
Department of Computer Science and Information Engineering, National Taiwan University, Taipei 106, Taiwan
Ching-pei Lee
Department of Advanced Data Science, Institute of Statistical Mathematics, Tachikawa, Tokyo 190-8562, Japan