How do simple rotations affect the implicit bias of Adam?

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies an implicit-bias degradation in adaptive optimizers such as Adam under feature-space rotations: even small orthogonal transformations of the data can disrupt Adam's "richness bias," causing it to converge to a linear decision boundary, rather than a nonlinear one closer to the Bayes-optimal boundary, and to generalize worse than gradient descent. To address this, the paper shows that a recently proposed reparameterization method, which applies an orthogonal transformation to the optimization objective, renders any first-order method equivariant to data rotations, and demonstrates empirically that it restores Adam's bias toward rich decision boundaries on rotated data.

📝 Abstract
Adaptive gradient methods such as Adam and Adagrad are widely used in machine learning, yet their effect on the generalization of learned models -- relative to methods like gradient descent -- remains poorly understood. Prior work on binary classification suggests that Adam exhibits a "richness bias," which can help it learn nonlinear decision boundaries closer to the Bayes-optimal decision boundary relative to gradient descent. However, the coordinate-wise preconditioning scheme employed by Adam renders the overall method sensitive to orthogonal transformations of feature space. We show that this sensitivity can manifest as a reversal of Adam's competitive advantage: even small rotations of the underlying data distribution can make Adam forfeit its richness bias and converge to a linear decision boundary that is farther from the Bayes-optimal decision boundary than the one learned by gradient descent. To alleviate this issue, we show that a recently proposed reparameterization method -- which applies an orthogonal transformation to the optimization objective -- endows any first-order method with equivariance to data rotations, and we empirically demonstrate its ability to restore Adam's bias towards rich decision boundaries.
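The rotation sensitivity described in the abstract stems from Adam's coordinate-wise preconditioning. A minimal sketch below (not from the paper; the quadratic objective, step sizes, and rotation angle are illustrative assumptions) contrasts plain gradient descent, which is equivariant to orthogonal reparameterizations of its objective, with Adam, which is not:

```python
import numpy as np

H = np.diag([100.0, 1.0])            # ill-conditioned quadratic: f(w) = 0.5 * w^T H w
w0 = np.array([1.0, 1.0])
theta = 0.4                          # a small rotation of feature space
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def gd_step(w, g, state, t, lr=1e-3):
    return w - lr * g, state

def adam_step(w, g, state, t, lr=1e-2, b1=0.9, b2=0.999, eps=1e-8):
    m, v = state if state is not None else (np.zeros_like(w), np.zeros_like(w))
    m = b1 * m + (1 - b1) * g        # first-moment estimate
    v = b2 * v + (1 - b2) * g * g    # coordinate-wise second-moment estimate
    mh, vh = m / (1 - b1 ** t), v / (1 - b2 ** t)
    return w - lr * mh / (np.sqrt(vh) + eps), (m, v)

def run(step_fn, Q, steps=50):
    """Optimize the rotated objective w -> f(Q w) from the matched start Q^T w0,
    then map the result back to the original coordinates."""
    w, state = Q.T @ w0, None
    for t in range(1, steps + 1):
        g = Q.T @ H @ (Q @ w)        # gradient of w -> f(Q w)
        w, state = step_fn(w, g, state, t)
    return Q @ w

I = np.eye(2)
gd_gap = np.linalg.norm(run(gd_step, I) - run(gd_step, R))
adam_gap = np.linalg.norm(run(adam_step, I) - run(adam_step, R))
print(f"GD gap under rotation:   {gd_gap:.2e}")    # ~0: GD is rotation-equivariant
print(f"Adam gap under rotation: {adam_gap:.2e}")  # clearly nonzero
```

For gradient descent, the two runs coincide exactly up to floating-point roundoff, since its update commutes with orthogonal maps; Adam's square root of coordinate-wise second moments does not, so its two runs land at visibly different points.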
Problem

Research questions and friction points this paper is trying to address.

Adam's coordinate-wise preconditioning makes its implicit bias sensitive to data rotations
Even small orthogonal transformations of the data reverse Adam's competitive advantage over gradient descent
How to restore Adam's richness bias under rotations of feature space
Innovation

Methods, ideas, or system contributions that make the work stand out.

Orthogonal transformation reparameterization for rotation equivariance
Restores Adam's bias towards rich decision boundaries
Enhances first-order methods' robustness to data rotations