ADAM Optimization with Adaptive Batch Selection

📅 2025-12-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inefficient convergence in Adam caused by uneven sample contributions, this paper proposes AdamCB (Adam with Combinatorial Bandit Sampling), a variant that integrates combinatorial bandit techniques into Adam to enable adaptive batch sampling. Methodologically, the authors design a feedback mechanism that exploits information from all samples in a batch simultaneously and establish a tighter regret bound than the prior bandit-based variant of Adam. Experiments demonstrate that AdamCB converges faster than standard Adam and existing bandit-based methods while consistently outperforming them in practice. The core contribution is the integration of combinatorial bandit sampling with adaptive learning-rate optimization, enabling sample-level contribution awareness with stronger theoretical guarantees.

📝 Abstract
Adam is a widely used optimizer in neural network training due to its adaptive learning rate. However, because different data samples influence model updates to varying degrees, treating them equally can lead to inefficient convergence. To address this, a prior work proposed adapting the sampling distribution using a bandit framework to select samples adaptively. While promising, the bandit-based variant of Adam suffers from limited theoretical guarantees. In this paper, we introduce Adam with Combinatorial Bandit Sampling (AdamCB), which integrates combinatorial bandit techniques into Adam to resolve these issues. AdamCB is able to fully utilize feedback from multiple samples at once, enhancing both theoretical guarantees and practical performance. Our regret analysis shows that AdamCB achieves faster convergence than Adam-based methods including the previous bandit-based variant. Numerical experiments demonstrate that AdamCB consistently outperforms existing methods.
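For context, the standard Adam update that AdamCB builds on can be sketched as follows (a minimal NumPy version of the usual first/second-moment update with bias correction; the function name and hyperparameter defaults are the conventional ones, not taken from this paper):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One step of the standard Adam update on parameters `theta`.

    `t` is the 1-indexed step count used for bias correction.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment (uncentered variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

The paper's point is that the gradient fed into this update is computed on a minibatch whose samples are usually drawn uniformly, even though they contribute unequally to the update.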
Problem

Research questions and friction points this paper is trying to address.

AdamCB improves Adam's convergence via adaptive batch selection
It addresses inefficiency from equal sample treatment in training
The method enhances theoretical guarantees and practical performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdamCB integrates combinatorial bandit sampling
It utilizes feedback from multiple samples simultaneously
Achieves faster convergence with enhanced theoretical guarantees
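The sampling idea above can be illustrated with an EXP3-style combinatorial bandit sketch: keep a weight per training sample, draw each batch from a mixture of the weight distribution and uniform exploration, then up-weight samples that returned high importance-weighted reward (e.g., high loss). The weight scheme, `gamma`, and loss-as-reward choice are illustrative assumptions, not the paper's exact AdamCB algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def exp3_batch_sampler(weights, batch_size, gamma=0.1):
    """Sample a batch of distinct indices from an EXP3-style distribution.

    Mixes the normalized weights with uniform exploration (rate `gamma`).
    """
    n = len(weights)
    probs = (1 - gamma) * weights / weights.sum() + gamma / n
    idx = rng.choice(n, size=batch_size, replace=False, p=probs)
    return idx, probs

def update_weights(weights, probs, idx, rewards, gamma=0.1):
    """Exponentially up-weight sampled indices by importance-weighted reward."""
    n = len(weights)
    est = np.zeros(n)
    est[idx] = rewards / probs[idx]   # unbiased reward estimate for sampled arms
    return weights * np.exp(gamma * est / n)
```

In a training loop, `rewards` could be the per-sample losses of the selected batch, so that hard examples are revisited more often while the uniform mixing term guarantees every sample retains some probability of being chosen.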