🤖 AI Summary
Although the Adam optimizer exhibits rapid convergence, it tends to converge to sharp minima that compromise generalization performance. To address this limitation, this work proposes Inverse Adam (InvAdam), which enhances the optimizer’s ability to escape sharp minima by element-wise multiplying—rather than dividing—the first- and second-order moments. The dynamical behavior of InvAdam is analyzed through the lens of diffusion theory. Building upon this insight, the authors further integrate Adam and InvAdam into a unified framework termed DualAdam, which preserves fast convergence while substantially improving generalization. Empirical evaluations demonstrate that DualAdam consistently outperforms Adam and its state-of-the-art variants across both image classification tasks and fine-tuning of large language models.
📝 Abstract
In the training of neural networks, adaptive moment estimation (Adam) typically converges fast but exhibits suboptimal generalization performance. A widely accepted explanation for this generalization gap is that Adam tends to converge to sharp minima. To enhance its ability to find flat minima, we propose a new variant named inverse Adam (InvAdam). The key change lies in its parameter update mechanism, which inverts that of Adam: InvAdam computes the element-wise multiplication of the first-order and second-order moments, whereas Adam computes their element-wise division. This modification increases the step size of the parameter update when the elements of the second-order moment are large, and decreases it when they are small, helping parameters escape sharp minima and settle in flat ones. However, InvAdam's update mechanism may hinder convergence. To address this challenge, we propose dual Adam (DualAdam), which integrates the update mechanisms of both Adam and InvAdam, ensuring convergence while enhancing generalization performance. Additionally, we introduce diffusion theory to mathematically demonstrate InvAdam's ability to escape sharp minima. Extensive experiments are conducted on image classification tasks and large language model (LLM) fine-tuning. The results validate that DualAdam outperforms Adam and its state-of-the-art variants in terms of generalization performance. The code is publicly available at https://github.com/LongJin-lab/DualAdam.
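To make the multiply-versus-divide distinction concrete, here is a minimal sketch of the two update directions as the abstract describes them. The exact InvAdam formula is not given in the abstract, so the `invadam_step` below is an assumption: it mirrors Adam's rule but multiplies the bias-corrected first moment by the square root of the second moment instead of dividing by it.

```python
import numpy as np

def adam_step(m_hat, v_hat, lr=1e-3, eps=1e-8):
    """Standard Adam direction: divide the first moment by the
    square root of the second moment (small steps where gradient
    variance is large)."""
    return lr * m_hat / (np.sqrt(v_hat) + eps)

def invadam_step(m_hat, v_hat, lr=1e-3):
    """Hypothetical InvAdam direction (assumed form): multiply
    instead of divide, so large second-moment entries, associated
    with sharp directions, yield LARGER steps."""
    return lr * m_hat * np.sqrt(v_hat)

# Two coordinates with equal first moment but very different
# second moments ("flat" vs. "sharp" direction).
m = np.array([0.5, 0.5])
v = np.array([0.01, 1.0])

adam = adam_step(m, v)
inv = invadam_step(m, v)
# Adam shrinks the step in the high-variance coordinate, while
# InvAdam enlarges it -- the escape-sharp-minima behavior the
# abstract attributes to the multiplicative rule.
```

This also makes the convergence concern plausible: near a minimum, Adam's division keeps steps bounded, while the multiplicative rule keeps amplifying steps in high-variance directions, which motivates combining the two in DualAdam.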