Convergence of Two Time-Scale Stochastic Approximation: A Martingale Approach

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper analyzes the two time-scale stochastic approximation (TTSSA) algorithm of Borkar (1997) using a martingale approach, yielding simple sufficient conditions for the iterations to be bounded almost surely and estimates on the rate at which the mean-squared error converges to zero. The theory covers nonlinear equations, establishes almost-sure convergence (as opposed to convergence in distribution or in the mean), and derives distinct rates of convergence for the fast and the slow subsystems. The results continue to hold when the measurement errors have nonzero conditional mean and/or conditional variances that grow without bound. In the standard setting of zero conditional mean and bounded conditional variance, the mean-squared error is shown to converge at a rate of $o(t^{-η})$ for all $η \in (0,1)$, improving on the $O(t^{-2/3})$ bound of Doan (2023).

📝 Abstract
In this paper, we analyze the two time-scale stochastic approximation (TTSSA) algorithm introduced in Borkar (1997) using a martingale approach. This approach leads to simple sufficient conditions for the iterations to be bounded almost surely, as well as estimates on the rate at which the mean-squared error of the TTSSA algorithm converges to zero. Our theory applies to nonlinear equations, in contrast to many papers in the TTSSA literature that assume the equations are linear. The convergence of TTSSA is proved in the "almost sure" sense, in contrast to earlier papers on TTSSA that establish convergence in distribution, convergence in the mean, and the like. Moreover, in this paper we establish different rates of convergence for the fast and the slow subsystems, perhaps for the first time. Finally, all of the above results continue to hold when the two measurement errors have nonzero conditional mean, and/or have conditional variances that grow without bound as the iterations proceed. This is in contrast to previous papers, which assumed that the errors form a martingale difference sequence with uniformly bounded conditional variance. It is shown that when the measurement errors have zero conditional mean and the conditional variance remains bounded, the mean-squared error of the iterations converges to zero at a rate of $o(t^{-η})$ for all $η \in (0,1)$. This improves upon the rate of $O(t^{-2/3})$ proved in Doan (2023), the best bound available to date. Our bound is virtually the same as the rate of $O(t^{-1})$ proved in Doan (2024), which, however, applies to a Polyak-Ruppert averaged version of TTSSA rather than to the iterations themselves. Rates of convergence are also established for the case where the errors have nonzero conditional mean and/or unbounded conditional variance.
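The abstract concerns the two time-scale scheme of Borkar (1997), in which a fast and a slow iterate are updated with step sizes of different orders and corrupted by measurement noise. The following is a hypothetical toy sketch of such an iteration, not the paper's analysis: the coupled system, the step-size exponents, and the noise level are all illustrative assumptions.

```python
import numpy as np

# Toy two time-scale stochastic approximation (TTSSA) iteration.
# Fast iterate y tracks the slow iterate x (root of x - y = 0), while the
# slow iterate x is driven toward the point where 1 - y = 0. The step sizes
# satisfy b_t / a_t -> 0, so y evolves on the faster time scale.

rng = np.random.default_rng(0)

x, y = 0.0, 0.0          # slow and fast iterates
sigma = 0.1              # std. dev. of the additive measurement noise

for t in range(1, 50_000):
    a_t = 1.0 / t**0.6   # fast step size
    b_t = 1.0 / t**0.9   # slow step size; b_t / a_t -> 0

    # Fast subsystem: y chases x, observed with noise.
    y += a_t * ((x - y) + sigma * rng.standard_normal())
    # Slow subsystem: x is driven toward 1 using the fast iterate y.
    x += b_t * ((1.0 - y) + sigma * rng.standard_normal())

# Both iterates should settle near the equilibrium (x*, y*) = (1, 1).
print(round(x, 2), round(y, 2))
```

Here almost-sure convergence of $(x_t, y_t)$ to the joint root, and the separate rates for the fast and slow subsystems, are exactly the kinds of guarantees the paper establishes under far weaker noise assumptions.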
Problem

Research questions and friction points this paper is trying to address.

two time-scale stochastic approximation
convergence
martingale approach
nonlinear equations
measurement errors
Innovation

Methods, ideas, or system contributions that make the work stand out.

two time-scale stochastic approximation
martingale approach
almost sure convergence
nonlinear equations
convergence rate