🤖 AI Summary
This work addresses the poorly understood implicit bias of the Adam optimizer under small-batch (especially single-sample) training, focusing on logistic regression over linearly separable data. We analyze the convergence behavior of incremental Adam via a dynamical analysis in the β₂ → 1 limit, construct a proxy algorithm that captures this limit, and characterize its convergence direction through a data-dependent dual fixed-point formulation. We rigorously prove that, on certain structured datasets, incremental Adam converges to the ℓ₂ maximum-margin classifier, whereas Signum converges to the ℓ∞ maximum-margin solution for any batch size. These results reveal the decisive role of the batching scheme in shaping an optimizer's implicit bias, provide the first theoretical characterization of Adam's batch sensitivity, and establish a new analytical lens for understanding the generalization preferences of adaptive optimizers.
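The Signum update contrasted with Adam above is simple enough to sketch directly. Below is a minimal, hypothetical illustration (not the paper's construction): single-sample Signum on a toy separable logistic-regression problem, where labels are absorbed into the features so the per-sample loss is log(1 + exp(-w·x)). The dataset, step counts, and hyperparameters are all assumptions for illustration.

```python
import numpy as np

# Toy separable data (assumed, not from the paper); labels y=+1 are
# absorbed into the features, so separability means min_i w·x_i > 0.
X = np.array([[2.0, 1.0], [1.0, 2.0], [3.0, 0.5], [0.5, 3.0]])

def signum(X, steps=2000, lr=1e-2, beta=0.99):
    """Single-sample Signum: sign of an exponential moving average of gradients."""
    w = np.zeros(X.shape[1])
    m = np.zeros_like(w)
    for step in range(steps):
        x = X[step % len(X)]                      # cyclic single-sample pass
        g = -x / (1.0 + np.exp(np.clip(w @ x, -50, 50)))  # grad of log(1+exp(-w·x))
        m = beta * m + (1 - beta) * g             # momentum buffer
        w -= lr * np.sign(m)                      # sign-of-momentum step
    return w

w = signum(X)
print(w, np.min(X @ w))  # all margins should end up positive
```

For this particular toy dataset the ℓ∞ max-margin direction is proportional to (1, 1), and the sign-based update indeed drives both coordinates up at the same unit rate, matching the equal-coordinate geometry that the ℓ∞ bias predicts here.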
📝 Abstract
Adam [Kingma and Ba, 2015] is the de facto optimizer in deep learning, yet its theoretical understanding remains limited. Prior analyses show that Adam favors solutions aligned with $\ell_\infty$-geometry, but these results are restricted to the full-batch regime. In this work, we study the implicit bias of incremental Adam (using one sample per step) for logistic regression on linearly separable data, and we show that its bias can deviate from the full-batch behavior. To illustrate this, we construct a class of structured datasets where incremental Adam provably converges to the $\ell_2$-max-margin classifier, in contrast to the $\ell_\infty$-max-margin bias of full-batch Adam. For general datasets, we develop a proxy algorithm that captures the limiting behavior of incremental Adam as $\beta_2 \to 1$, and we characterize its convergence direction via a data-dependent dual fixed-point formulation. Finally, we prove that, unlike Adam, Signum [Bernstein et al., 2018] converges to the $\ell_\infty$-max-margin classifier for any batch size by taking $\beta$ close enough to 1. Overall, our results highlight that the implicit bias of Adam crucially depends on both the batching scheme and the dataset, while Signum's bias remains invariant.
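The incremental (one-sample-per-step) regime studied in the abstract can be sketched concretely. The following is a minimal illustration, not the paper's analysis: standard Adam with bias correction, applied cyclically one sample at a time to a toy separable logistic-regression problem (labels absorbed into the features). The dataset and hyperparameters are assumptions; the object of interest is the normalized iterate w/‖w‖, whose limit is the convergence direction that implicit-bias results characterize.

```python
import numpy as np

# Toy separable data (assumed): labels y=+1 folded into the features,
# so the per-sample loss is log(1 + exp(-w·x)).
X = np.array([[2.0, 1.0], [1.0, 2.0], [3.0, 0.5], [0.5, 3.0]])

def incremental_adam(X, steps=5000, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    """Incremental Adam: one sample per step, standard bias-corrected updates."""
    w = np.zeros(X.shape[1])
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for t in range(1, steps + 1):
        x = X[(t - 1) % len(X)]                   # cyclic single-sample pass
        s = 1.0 / (1.0 + np.exp(np.clip(w @ x, -50, 50)))
        g = -s * x                                # grad of log(1 + exp(-w·x))
        m = b1 * m + (1 - b1) * g                 # first-moment EMA
        v = b2 * v + (1 - b2) * g * g             # second-moment EMA
        mhat = m / (1 - b1 ** t)                  # bias corrections
        vhat = v / (1 - b2 ** t)
        w -= lr * mhat / (np.sqrt(vhat) + eps)
    return w

w = incremental_adam(X)
direction = w / np.linalg.norm(w)  # convergence direction: the implicit-bias object
print(direction, np.min(X @ w))
```

Sweeping β₂ toward 1 in such a sketch is the regime where the paper's proxy algorithm is meant to capture the limiting dynamics; the sketch itself only shows the setup, not the dual fixed-point characterization.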