🤖 AI Summary
Existing adaptive optimizers (e.g., Adam) converge rapidly but suffer from poor generalization, primarily due to their difficulty in converging to flat minima. To address this, we propose Frankenstein—a novel optimizer featuring a state-aware dynamic momentum scheduling mechanism that adaptively modulates first- and second-moment coefficients in real time, thereby preserving fast convergence while steering optimization toward flat minima. We introduce, for the first time in optimization analysis, centered kernel alignment (CKA) and loss surface visualization to uncover the relationship between adaptive algorithm dynamics and generalization performance. Extensive evaluation across diverse benchmarks—including computer vision, natural language processing, few-shot learning, and scientific computing—demonstrates that Frankenstein consistently outperforms Adam, RMSProp, and SGD: it matches the convergence speed of adaptive methods while achieving generalization performance on par with—or even surpassing—that of SGD, thus jointly enhancing both training efficiency and model robustness.
📝 Abstract
Gradient-based optimization drives the unprecedented performance of modern deep neural network models across diverse applications. Adaptive algorithms have accelerated neural network training due to their rapid convergence rates; however, they struggle to find "flat minima" reliably, resulting in suboptimal generalization compared to stochastic gradient descent (SGD). By revisiting the mechanisms of various adaptive algorithms, we propose the Frankenstein optimizer, which combines their advantages. Frankenstein dynamically adjusts the first- and second-moment coefficients according to the optimizer's current state, directly maintaining consistent learning dynamics and immediately reflecting sudden gradient changes. Extensive experiments across several research domains, such as computer vision, natural language processing, few-shot learning, and scientific simulations, show that Frankenstein empirically surpasses existing adaptive algorithms and SGD in both convergence speed and generalization performance. Furthermore, this research deepens our understanding of adaptive algorithms through centered kernel alignment analysis and loss landscape visualization during the learning process.
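The abstract describes an Adam-style update whose first- and second-moment coefficients respond to the optimizer's current state. The paper's actual schedule is not given here, so the sketch below is a hypothetical illustration: it lowers the first-moment coefficient when consecutive gradients change sharply, so the momentum term "immediately reflects sudden gradient changes." The function name, the `tanh`-based modulation rule, and all hyperparameter values are assumptions for illustration only.

```python
import math

def dynamic_moment_step(x, g, state, lr=0.1, eps=1e-8,
                        beta1_base=0.9, beta2=0.999):
    """One Adam-style step with a state-dependent first-moment coefficient.
    The modulation rule below is a hypothetical stand-in, not the paper's
    actual state-aware schedule."""
    t = state.get("t", 0) + 1
    m = state.get("m", 0.0)
    v = state.get("v", 0.0)
    g_prev = state.get("g_prev", g)

    # State signal: relative change between consecutive gradients.
    change = abs(g - g_prev) / (abs(g) + eps)
    # Illustrative modulation: a sudden gradient change lowers beta1 so the
    # first moment reacts quickly; a stable gradient keeps beta1 near its base.
    beta1 = beta1_base * (1.0 - 0.5 * math.tanh(change))

    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g

    # With a time-varying beta1, bias correction uses the running product
    # of past coefficients rather than beta1 ** t.
    b1_prod = state.get("b1_prod", 1.0) * beta1
    m_hat = m / (1.0 - b1_prod)
    v_hat = v / (1.0 - beta2 ** t)

    state.update(t=t, m=m, v=v, g_prev=g, b1_prod=b1_prod)
    return x - lr * m_hat / (math.sqrt(v_hat) + eps)

# Usage: minimize f(x) = x^2 (gradient 2x) from x = 5.0.
state = {}
x = 5.0
for _ in range(300):
    x = dynamic_moment_step(x, 2.0 * x, state)
```

The running-product bias correction is the only structural change from plain Adam needed to accommodate a time-varying coefficient; everything else follows the standard update.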