$O(1/k)$ Finite-Time Bound for Non-Linear Two-Time-Scale Stochastic Approximation

📅 2025-04-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the slow convergence rate of non-linear two-time-scale stochastic approximation algorithms, whose best previously known mean-square error bound is $O(1/k^{2/3})$, substantially worse than the $O(1/k)$ rate achievable in the linear setting. We establish a finite-time convergence bound achieving the optimal $O(1/k)$ rate. Focusing on canonical non-linear instances, including gradient descent-ascent and two-time-scale Lagrangian optimization, we develop an analytical framework that combines noise averaging with an induction argument showing the iterates remain bounded in expectation. Under standard assumptions, this yields the first $O(1/k)$ rate under non-linear contraction conditions, closing the gap with the linear setting and sharpening convergence guarantees for algorithms widely used in reinforcement learning and constrained optimization.

📝 Abstract
Two-time-scale stochastic approximation is an algorithm with coupled iterations which has found broad applications in reinforcement learning, optimization and game control. While several prior works have obtained a mean square error bound of $O(1/k)$ for linear two-time-scale iterations, the best known bound in the non-linear contractive setting has been $O(1/k^{2/3})$. In this work, we obtain an improved bound of $O(1/k)$ for non-linear two-time-scale stochastic approximation. Our result applies to algorithms such as gradient descent-ascent and two-time-scale Lagrangian optimization. The key step in our analysis involves rewriting the original iteration in terms of an averaged noise sequence which decays sufficiently fast. Additionally, we use an induction-based approach to show that the iterates are bounded in expectation.
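To make the coupled-iteration structure concrete, here is a minimal sketch of two-time-scale stochastic gradient descent-ascent, one of the algorithms the abstract cites. The objective, noise level, step-size constants, and decay schedules below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Hedged sketch: two-time-scale stochastic gradient descent-ascent on the
# strongly-convex-strongly-concave saddle problem
#   f(x, y) = 0.5*x**2 - 0.5*y**2 + x*y,
# whose unique saddle point is (0, 0). All constants are assumptions
# chosen for illustration only.
rng = np.random.default_rng(0)
x, y = 5.0, -3.0

for k in range(1, 20001):
    alpha = 1.0 / (k + 10)  # faster step size for the descent variable x
    beta = 0.5 / (k + 10)   # slower step size for the ascent variable y
    gx = x + y + 0.1 * rng.standard_normal()   # noisy estimate of grad_x f
    gy = -y + x + 0.1 * rng.standard_normal()  # noisy estimate of grad_y f
    x -= alpha * gx  # descent step on x
    y += beta * gy   # ascent step on y

print(abs(x), abs(y))  # both iterates drift toward the saddle point (0, 0)
```

The two coupled updates with different step-size scales are exactly the "two-time-scale" structure whose mean-square error the paper bounds by $O(1/k)$.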
Problem

Research questions and friction points this paper is trying to address.

Improving convergence bound for non-linear two-time-scale stochastic approximation
Extending the $O(1/k)$ bound from linear to non-linear contractive settings
Analyzing algorithms like gradient descent-ascent via noise sequence averaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved $O(1/k)$ bound for non-linear iterations
Rewriting iteration with fast-decaying averaged noise
Induction-based approach for bounded iterates
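The "fast-decaying averaged noise" idea can be illustrated generically. The simulation below is a hedged sketch, not the paper's actual construction: it shows that a step-size-weighted running average of i.i.d. zero-mean noise has vanishing variance, which is the kind of fast decay such an analysis can exploit.

```python
import numpy as np

# Hypothetical illustration: average i.i.d. zero-mean noise with decaying
# weights alpha_k = 1/k. With this schedule the recursion reduces to the
# sample mean, whose variance decays like sigma**2 / K. (Illustrative only;
# the paper defines its averaged noise sequence in its proofs.)
rng = np.random.default_rng(1)
sigma = 1.0
K = 10000
trials = 200

finals = []
for _ in range(trials):
    avg = 0.0
    for k in range(1, K + 1):
        alpha = 1.0 / k
        xi = sigma * rng.standard_normal()
        avg = (1 - alpha) * avg + alpha * xi  # step-size-weighted average
    finals.append(avg)

print(np.var(finals))  # far smaller than sigma**2 = 1.0
```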