🤖 AI Summary
Conventional adaptive optimizers (e.g., Adam) ignore the intrinsic low-rankness of gradients and the approximately block-diagonal structure of Hessians in deep networks, whose parameters are naturally matrices and tensors, limiting convergence efficiency. Method: We propose ASGO, the first structured adaptive optimization algorithm to explicitly incorporate these gradient structural priors into the adaptive framework. It maintains a dynamically updated structured preconditioner that jointly models gradient low-rankness and Hessian block-diagonality by integrating low-rank gradient approximation with block-diagonal Hessian estimation. Contribution/Results: We theoretically establish a faster convergence rate than mainstream structured optimizers and characterize the explicit mechanism by which structural priors accelerate convergence. Empirically, on language modeling tasks, our method significantly improves both training efficiency and final model performance, validating the effectiveness and practicality of structural priors in adaptive optimization.
📝 Abstract
Training deep neural networks (DNNs) is a structured optimization problem, because the parameters are naturally represented by matrices and tensors rather than flattened vectors. Under this structured representation, it has been widely observed that gradients are low-rank and Hessians are approximately block-wise diagonal. These structured properties are crucial for designing efficient optimization algorithms but may not be utilized by current popular optimizers like Adam. In this paper, we present a novel optimization algorithm, ASGO, that capitalizes on these properties by employing a preconditioner that is adaptively updated using structured gradients. Through a fine-grained theoretical analysis, we prove that ASGO achieves superior convergence rates compared to existing structured gradient methods. Based on this convergence theory, we further demonstrate that ASGO benefits from the low-rank and block-wise diagonal properties. We also discuss practical modifications of ASGO and empirically verify the effectiveness of the algorithm on language modeling tasks.
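The abstract does not spell out the update rule, so the sketch below is only a plausible instance of such a structured preconditioner, under stated assumptions: a one-sided, Shampoo-style second-moment matrix V_t = β·V_{t-1} + (1−β)·G_t·G_tᵀ accumulated from the matrix-shaped gradient G_t and applied as (V_t^{1/2} + εI)^{−1}·G_t. The function names (`asgo_like_step`, `inv_matrix_sqrt_plus_eps`) and all hyperparameters are hypothetical illustrations, not the paper's implementation.

```python
# A minimal sketch of a structured adaptive update in the spirit described
# by the abstract. The exact preconditioner form is an ASSUMPTION: we keep a
# one-sided second moment V_t = beta*V_{t-1} + (1-beta)*G_t @ G_t.T over the
# matrix-shaped gradient and precondition with (V_t^{1/2} + eps*I)^{-1}.
import numpy as np

def inv_matrix_sqrt_plus_eps(V, eps):
    """Compute (V^{1/2} + eps*I)^{-1} for a symmetric PSD matrix V."""
    eigvals, eigvecs = np.linalg.eigh(V)
    # Clamp tiny negative eigenvalues caused by floating-point error.
    sqrt_vals = np.sqrt(np.clip(eigvals, 0.0, None))
    # U diag(1 / (sqrt(lambda) + eps)) U^T
    return (eigvecs / (sqrt_vals + eps)) @ eigvecs.T

def asgo_like_step(W, G, V, lr=1e-2, beta=0.95, eps=1e-8):
    """One update of a hypothetical structured adaptive optimizer.

    W: (m, n) parameter matrix; G: (m, n) gradient;
    V: (m, m) running left-preconditioner state.
    Returns the updated (W, V).
    """
    V = beta * V + (1.0 - beta) * (G @ G.T)  # structured second moment
    P = inv_matrix_sqrt_plus_eps(V, eps)     # (V^{1/2} + eps*I)^{-1}
    W = W - lr * (P @ G)                     # preconditioned gradient step
    return W, V

# Toy usage: one step on a random least-squares objective.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
X, Y = rng.standard_normal((8, 16)), rng.standard_normal((4, 16))
G = (W @ X - Y) @ X.T / 16  # gradient of 0.5*||W X - Y||_F^2, averaged over 16 columns
V = np.zeros((4, 4))
W, V = asgo_like_step(W, G, V)
```

One design point this sketch illustrates: accumulating G·Gᵀ keeps the preconditioner at m×m for an m×n parameter matrix, rather than the mn×mn matrix a full-matrix adaptive method would need, which is one concrete way exploiting the matrix structure of parameters can pay off.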