OptEMA: Adaptive Exponential Moving Average for Stochastic Optimization with Zero-Noise Optimality

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces OptEMA, an adaptive exponential moving average scheme for stochastic optimization, together with two variants: OptEMA-M, which applies a decreasing EMA coefficient to the first-order moment while keeping the second-order decay fixed, and OptEMA-V, which swaps these roles. The method is closed-loop and Lipschitz-free: its effective stepsizes depend on the optimization trajectory rather than on prior knowledge of the Lipschitz constant. Under standard SGD assumptions (smoothness, a lower-bounded objective, and unbiased gradients with bounded variance), both variants achieve a noise-adaptive convergence rate of $\widetilde{\mathcal{O}}(T^{-1/2}+\sigma^{1/2} T^{-1/4})$ for the average gradient norm, which reduces to the nearly optimal deterministic rate $\widetilde{\mathcal{O}}(T^{-1/2})$ in the zero-noise regime without hyperparameter retuning.

📝 Abstract
The Exponential Moving Average (EMA) is a cornerstone of widely used optimizers such as Adam. However, existing theoretical analyses of Adam-style methods have notable limitations: their guarantees can remain suboptimal in the zero-noise regime, rely on restrictive boundedness conditions (e.g., bounded gradients or objective gaps), use constant or open-loop stepsizes, or require prior knowledge of Lipschitz constants. To overcome these bottlenecks, we introduce OptEMA and analyze two novel variants: OptEMA-M, which applies an adaptive, decreasing EMA coefficient to the first-order moment with a fixed second-order decay, and OptEMA-V, which swaps these roles. Crucially, OptEMA is closed-loop and Lipschitz-free in the sense that its effective stepsizes are trajectory-dependent and do not require the Lipschitz constant for parameterization. Under standard stochastic gradient descent (SGD) assumptions, namely smoothness, a lower-bounded objective, and unbiased gradients with bounded variance, we establish rigorous convergence guarantees. Both variants achieve a noise-adaptive convergence rate of $\widetilde{\mathcal{O}}(T^{-1/2}+\sigma^{1/2} T^{-1/4})$ for the average gradient norm, where $\sigma$ is the noise level. In particular, in the zero-noise regime where $\sigma=0$, our bounds reduce to the nearly optimal deterministic rate $\widetilde{\mathcal{O}}(T^{-1/2})$ without manual hyperparameter retuning.
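The abstract describes OptEMA-M as an Adam-like recursion with an adaptive, decreasing EMA coefficient on the first moment and a fixed second-order decay. The paper's exact update rules and coefficient schedule are not reproduced on this page, so the following is only an illustrative sketch under assumed choices (the schedule `beta1_t = 1 - 1/sqrt(t+1)`, the base stepsize `alpha`, and the bias correction are assumptions, not the authors' specification):

```python
import numpy as np

def optema_m_step(theta, grad, m, v, t, beta2=0.999, eps=1e-8, alpha=0.1):
    """One OptEMA-M-style update (illustrative sketch, not the paper's exact method).

    Assumed structure per the abstract: the first-moment EMA coefficient
    beta1_t is adaptive and changes over time, while the second-moment
    decay beta2 stays fixed. The effective stepsize is closed-loop: it
    depends on the trajectory through v, not on a Lipschitz constant.
    """
    # Assumed adaptive schedule for the first-moment coefficient.
    beta1_t = 1.0 - 1.0 / np.sqrt(t + 1)
    m = beta1_t * m + (1.0 - beta1_t) * grad        # first moment, adaptive EMA
    v = beta2 * v + (1.0 - beta2) * grad**2         # second moment, fixed decay
    v_hat = v / (1.0 - beta2**(t + 1))              # bias correction (assumed)
    theta = theta - alpha * m / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimal usage: minimize f(x) = x^2, whose gradient is 2x.
theta, m, v = 1.0, 0.0, 0.0
for t in range(100):
    theta, m, v = optema_m_step(theta, 2.0 * theta, m, v, t)
```

OptEMA-V would swap the roles, holding the first-moment decay fixed while adapting the second-moment coefficient; the stepsize remains trajectory-dependent in either case.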
Problem

Research questions and friction points this paper is trying to address.

Exponential Moving Average
Stochastic Optimization
Zero-Noise Optimality
Convergence Guarantee
Lipschitz Constant
Innovation

Methods, ideas, or system contributions that make the work stand out.

adaptive EMA
zero-noise optimality
Lipschitz-free optimization
closed-loop stepsize
noise-adaptive convergence