🤖 AI Summary
In structured neural network training, coupling nonsmooth regularization with adaptive diagonal preconditioning yields update-direction subproblems that lack closed-form solutions in general and are computationally expensive to solve.
Method: We propose RAMDA, an optimization algorithm integrating the dual averaging framework, adaptive diagonal preconditioning, and a regularized momentum mechanism. Leveraging manifold identification theory, we design a verifiable inexact subproblem solving criterion and an efficient solver.
Contribution/Results: RAMDA is the first method to rigorously guarantee asymptotic identification of the optimal structure induced by the regularizer under inexact subproblem solutions, simultaneously ensuring local structural optimality and strong generalization. Empirically, it consistently outperforms state-of-the-art methods on large-scale vision, language, and speech tasks, improving both training efficiency and the sparsity/structure quality of the learned models. The implementation is publicly available.
📄 Abstract
We propose a Regularized Adaptive Momentum Dual Averaging (RAMDA) algorithm for training structured neural networks. As in existing regularized adaptive methods, the subproblem for computing the update direction of RAMDA involves a nonsmooth regularizer and a diagonal preconditioner, and therefore does not possess a closed-form solution in general. We thus carefully devise an implementable inexactness condition that retains convergence guarantees similar to those of the exact versions, and propose a companion efficient solver for the subproblems of both RAMDA and existing methods to make them practically feasible. We leverage the theory of manifold identification in variational analysis to show that, even in the presence of such inexactness, the iterates of RAMDA attain the ideal structure induced by the regularizer at the stationary point of asymptotic convergence. This structure is locally optimal near the point of convergence, so RAMDA is guaranteed to obtain the best structure possible among all methods converging to the same point, making it the first regularized adaptive method that outputs models with outstanding predictive performance while being (locally) optimally structured. Extensive numerical experiments on large-scale modern computer vision, language modeling, and speech tasks show that the proposed RAMDA is efficient and consistently outperforms the state of the art for training structured neural networks. An implementation of our algorithm is available at https://www.github.com/ismoptgroup/RAMDA/.
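To make the role of the inexact subproblem solver concrete, the following is a minimal sketch (not the paper's actual algorithm or criterion) of the kind of subproblem that arises when a diagonal preconditioner meets a group-sparsity regularizer: with a non-uniform diagonal `D`, the proximal step has no closed form, so it is solved inexactly by an inner proximal-gradient loop. The function names, the simple step-norm stopping rule, and all parameters here are illustrative assumptions, not the verifiable inexactness condition developed in the paper.

```python
import numpy as np

def group_soft_threshold(w, tau, groups):
    """Prox of tau * sum_g ||w_g||_2 (group-lasso regularizer), group-wise."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * w[g]
    return out

def solve_subproblem(g_bar, z, D, lam, groups, max_iter=200, tol=1e-8):
    """Inexactly solve the preconditioned proximal subproblem
        min_w  <g_bar, w> + 0.5 * (w - z)^T diag(D) (w - z)
               + lam * sum_g ||w_g||_2
    by proximal gradient. With a non-scalar diagonal D this has no
    closed form for group norms, hence the inner iterations."""
    L = D.max()                      # Lipschitz constant of the smooth part
    w = z.copy()
    for _ in range(max_iter):
        grad = g_bar + D * (w - z)   # gradient of the smooth quadratic part
        w_new = group_soft_threshold(w - grad / L, lam / L, groups)
        if np.linalg.norm(w_new - w) <= tol:  # simplified stopping rule
            return w_new
        w = w_new
    return w
```

When the inner loop terminates, entire groups of the returned iterate are exactly zero, which is the structure that manifold identification guarantees the (inexact) iterates eventually lock onto.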