Non-Expansive Mappings in Two-Time-Scale Stochastic Approximation: Finite-Time Analysis

📅 2025-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies the finite-time convergence of two-timescale stochastic approximation algorithms when the slow-scale mapping is nonexpansive—relaxing the classical contraction requirement. The absence of contraction in the slow-variable update undermines standard stability and convergence analyses. To address this, we establish, for the first time, a finite-time error bound of order $O(1/k^{1/4-\varepsilon})$ under nonexpansive slow dynamics and prove almost-sure convergence of the iterate sequence to the fixed-point set. Our analysis integrates stochastic approximation theory, Krasnoselskii–Mann iteration techniques, and refined mean-square error estimation. This result breaks the traditional double-contraction assumption, achieves the current best-known algebraic decay rate, and extends to applications including minimax optimization, linear stochastic approximation, and Lagrangian dual learning.

📝 Abstract
Two-time-scale stochastic approximation is an iterative algorithm used in applications such as optimization, reinforcement learning, and control. Finite-time analysis of these algorithms has primarily focused on fixed point iterations where both time-scales have contractive mappings. In this paper, we study two-time-scale iterations, where the slower time-scale has a non-expansive mapping. For such algorithms, the slower time-scale can be considered a stochastic inexact Krasnoselskii-Mann iteration. We show that the mean square error decays at a rate $O(1/k^{1/4-\epsilon})$, where $\epsilon>0$ is arbitrarily small. We also show almost sure convergence of iterates to the set of fixed points. We show the applicability of our framework by applying our results to minimax optimization, linear stochastic approximation, and Lagrangian optimization.
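The abstract's setup can be illustrated with a minimal numerical sketch. This is not the paper's algorithm or analysis; it is a toy assumption of the structure described above: the slow map is a plane rotation (nonexpansive but not a contraction, so plain fixed-point iteration would not converge), the slow update is a noisy Krasnoselskii-Mann averaging step, and a fast variable tracks the slow one on a faster timescale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonexpansive (norm-preserving) slow map: rotation by 90 degrees.
# Its only fixed point is the origin; it is NOT a contraction.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])   # slow iterate
y = np.array([0.0, 0.0])   # fast iterate; tracks y*(x) = x (a toy choice)

for k in range(1, 20001):
    alpha = 1.0 / k**0.6   # fast step size
    beta = 1.0 / k**0.9    # slow step size; beta/alpha -> 0 (two timescales)

    # Fast timescale: noisy tracking of y*(x) = x.
    y = y + alpha * (x - y + 0.1 * rng.standard_normal(2))

    # Slow timescale: stochastic inexact Krasnoselskii-Mann step,
    # averaging the current iterate with the (noisy) mapped fast iterate.
    x = (1 - beta) * x + beta * (R @ y + 0.1 * rng.standard_normal(2))

print(np.linalg.norm(x))  # drifts toward 0, the fixed point of R
```

The Krasnoselskii-Mann averaging is what makes this work: iterating `R` directly leaves the norm unchanged, while the convex combination `(1 - beta) * x + beta * R @ y` strictly shrinks it, consistent with the slow timescale being a stochastic inexact Krasnoselskii-Mann iteration.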
Problem

Research questions and friction points this paper is trying to address.

Stable Convergence
Dual Timescale Learning
Optimization and Control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-timescale Stochastic Approximation
Fixed-point Iteration Breakthrough
O(1/k^(1/4-ε)) Convergence Rate