Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning

📅 2024-05-15
🏛️ Annual Conference on Computational Learning Theory
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses a bilevel stochastic optimization problem arising in reinforcement learning, where policy evaluation and policy optimization are coupled across two time scales. We propose a stochastic gradient algorithm built on a double-averaging mechanism: running averages of the sampled upper- and lower-level operators decouple the two decision variables, avoiding explicit computation of the root of the strongly monotone lower-level operator. We establish finite-time convergence rates under strong convexity, the Polyak–Łojasiewicz (PL) condition, and general nonconvexity, improving on the best-known rates for classical two-time-scale stochastic approximation. The algorithm is fully online and sample-based, requiring neither variance reduction nor second-order information. Experiments on standard RL benchmarks show faster convergence, and the derived online variants match or surpass existing state-of-the-art methods.

📝 Abstract
Two-time-scale optimization is a framework introduced in Zeng et al. (2024) that abstracts a range of policy evaluation and policy optimization problems in reinforcement learning (RL). Akin to bi-level optimization under a particular type of stochastic oracle, the two-time-scale optimization framework has an upper-level objective whose gradient evaluation depends on the solution of a lower-level problem, which is to find the root of a strongly monotone operator. In this work, we propose a new method for solving two-time-scale optimization that achieves significantly faster convergence than the prior art. The key idea of our approach is to leverage an averaging step to improve the estimates of the operators at both the lower and upper levels before using them to update the decision variables. These additional averaging steps eliminate the direct coupling between the main variables, enabling the accelerated performance of our algorithm. We characterize the finite-time convergence rates of the proposed algorithm under various conditions on the underlying objective function, including strong convexity, the Polyak–Łojasiewicz (PL) condition, and general non-convexity. These rates significantly improve over the best-known complexity of the standard two-time-scale stochastic approximation algorithm. When applied to RL, we show how the proposed algorithm specializes to novel online sample-based methods that surpass or match the performance of the existing state of the art. Finally, we support our theoretical results with numerical simulations in RL.
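The averaging idea described in the abstract can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's exact algorithm: the samplers `G` and `H`, the step-size schedules, and the constants are all hypothetical. The structural point it shows is the one the abstract makes: each raw operator sample is first folded into a running average, and only the averaged estimates drive the variable updates.

```python
import random

def G(x, y):
    # Hypothetical noisy upper-level gradient sample; with y at its root
    # y*(x) = x, the effective objective is (x - 1)^2 up to scaling.
    return 2.0 * (y - 1.0) + random.gauss(0.0, 0.1)

def H(x, y):
    # Hypothetical noisy sample of a strongly monotone lower-level
    # operator whose root is y*(x) = x.
    return (y - x) + random.gauss(0.0, 0.1)

def fast_two_time_scale(iters=5000, seed=0):
    random.seed(seed)
    x, y = 2.0, 0.0           # upper- and lower-level variables
    f_bar, g_bar = 0.0, 0.0   # running averages of the operator samples
    for k in range(1, iters + 1):
        lam = k ** -0.6           # averaging weight (fastest scale)
        beta = k ** -0.7          # lower-level step size (fast)
        alpha = 0.5 * k ** -0.8   # upper-level step size (slow)
        # Average the sampled operators before touching the variables;
        # this is the extra step that decouples x and y.
        f_bar = (1.0 - lam) * f_bar + lam * G(x, y)
        g_bar = (1.0 - lam) * g_bar + lam * H(x, y)
        # Update each variable using its averaged estimate only.
        x -= alpha * f_bar
        y -= beta * g_bar
    return x, y
```

On this toy instance both variables should settle near the joint solution (x, y) = (1, 1). The schedule ordering lam > beta > alpha mirrors the time-scale separation in the framework: the averages track fastest, the lower-level variable tracks the upper one, and the upper-level variable moves slowest.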
Problem

Research questions and friction points this paper is trying to address.

Standard two-time-scale stochastic approximation converges slowly on coupled policy evaluation and optimization problems in RL.
Direct coupling between the upper- and lower-level variables limits the achievable convergence rates.
Finite-time guarantees are needed under strong convexity, the PL condition, and general non-convexity.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Averaging step enhances operator estimates before variable updates.
Decouples the main variables, enabling faster convergence.
Improves convergence rates in RL applications.