🤖 AI Summary
This work establishes the global convergence of single-timescale Actor-Critic algorithms for infinite-horizon discounted Markov decision processes with finite state spaces. Departing from the conventional paradigm of decaying step sizes for both updates, we propose a design in which the critic uses a *constant* step size while only the actor uses a decaying step size, and formulate an asynchronous update model for the two components. Leveraging tools from stochastic nonconvex optimization and the gradient-domination lemma, we prove global convergence in expectation. Crucially, we derive the first theoretically guaranteed sample complexity of $\mathcal{O}(\varepsilon^{-3})$ for Actor-Critic methods—improving upon the prior best global rate of $\mathcal{O}(\varepsilon^{-4})$ by one order—and thereby provide the first rigorous theoretical foundation for the practical use of constant-step-size critics.
📝 Abstract
In this paper, we establish the global convergence of the actor-critic algorithm with a significantly improved sample complexity of $O(\epsilon^{-3})$, advancing beyond the existing local convergence results. Previous works provide local convergence guarantees with a sample complexity of $O(\epsilon^{-2})$ for bounding the squared gradient of the return, which translates to a global sample complexity of $O(\epsilon^{-4})$ via the gradient domination lemma. In contrast to traditional methods that employ decreasing step sizes for both the actor and critic, we demonstrate that a constant step size for the critic is sufficient to ensure convergence in expectation. This key insight reveals that a decreasing step size for the actor alone suffices to handle the noise for both the actor and critic. Our findings provide theoretical support for the practical success of many algorithms that rely on constant step sizes.
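To make the step-size design concrete, below is a minimal tabular actor-critic sketch in which the critic's TD(0) update uses a constant step size while the actor's policy-gradient step decays over time. Everything here is an illustrative assumption, not the paper's exact algorithm: the toy 2-state MDP, the step-size constants, and the decay schedule $\alpha_t = (1+t)^{-0.75}$ are all hypothetical choices made for the example.

```python
import numpy as np

# Toy 2-state, 2-action MDP (hypothetical, for illustration only).
# P[s, a] = deterministic next state; R[s, a] = reward.
rng = np.random.default_rng(0)
nS, nA, gamma = 2, 2, 0.9
P = np.array([[0, 1], [1, 0]])
R = np.array([[0.0, 1.0], [0.0, 2.0]])

theta = np.zeros((nS, nA))  # actor: softmax policy parameters
V = np.zeros(nS)            # critic: state-value estimates

beta = 0.1                  # critic: CONSTANT step size
s = 0
for t in range(20000):
    alpha = 1.0 / (1 + t) ** 0.75        # actor: DECAYING step size (assumed schedule)
    logits = theta[s] - theta[s].max()
    pi = np.exp(logits) / np.exp(logits).sum()
    a = rng.choice(nA, p=pi)
    s2, r = P[s, a], R[s, a]
    delta = r + gamma * V[s2] - V[s]     # TD error
    V[s] += beta * delta                 # critic: constant-step TD(0) update
    grad_log = -pi                       # softmax score function grad of log pi(a|s)
    grad_log[a] += 1.0
    theta[s] += alpha * delta * grad_log # actor: policy-gradient step
    s = s2

# In this toy MDP, action 1 is the higher-reward choice in both states,
# so the learned policy should come to prefer it.
print(theta.argmax(axis=1))
```

Only the actor's step size shrinks: the intuition from the abstract is that this single decaying schedule absorbs the stochastic noise of both updates, so the critic can keep its constant step size throughout.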