🤖 AI Summary
This work addresses the suboptimal sample complexity of single-timescale actor-critic algorithms in infinite-horizon discounted Markov decision processes (MDPs). To mitigate the high variance in critic updates caused by the nonstationary sampling distribution induced by the evolving policy, the authors integrate STORM-based stochastic recursive momentum with a lightweight uniform replay buffer. This design achieves a provably optimal sample complexity for a practical single-timescale algorithm: in finite-state-action discounted MDPs, the method improves the sample complexity required to attain an ε-optimal policy from O(ε⁻³) to O(ε⁻²), while remaining simple and practical to implement.
📝 Abstract
We establish an optimal sample complexity of $O(\epsilon^{-2})$ for obtaining an $\epsilon$-optimal global policy using a single-timescale actor-critic (AC) algorithm in infinite-horizon discounted Markov decision processes (MDPs) with finite state-action spaces, improving upon the prior state of the art of $O(\epsilon^{-3})$. Our approach applies STORM (STOchastic Recursive Momentum) to reduce variance in the critic updates. However, because samples are drawn from a nonstationary occupancy measure induced by the evolving policy, variance reduction via STORM alone is insufficient. To address this challenge, we maintain a buffer holding a small fraction of recent samples and draw uniformly from it for each critic update. Importantly, these mechanisms are compatible with existing deep learning architectures and require only minor modifications, without compromising practical applicability.
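The two mechanisms described above can be illustrated with a minimal sketch: a tabular critic evaluated under a fixed policy, where each update draws uniformly from a small FIFO buffer of recent transitions and applies the STORM recursive-momentum estimator $d_t = g(w_t;\xi_t) + (1-a)\,(d_{t-1} - g(w_{t-1};\xi_t))$ to the TD semi-gradient. The toy MDP, buffer size, momentum, and step size below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

# Toy 2-state Markov chain under a fixed policy (assumed for illustration).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # P[s, s'] transition probabilities
r = np.array([1.0, 0.0])          # reward received in state s
gamma = 0.9

def td_grad(w, s, s_next):
    """TD(0) semi-gradient of 0.5 * delta^2 for a tabular critic w."""
    delta = r[s] + gamma * w[s_next] - w[s]
    g = np.zeros_like(w)
    g[s] = -delta                  # target treated as fixed (semi-gradient)
    return g

buffer = deque(maxlen=64)          # lightweight buffer of recent samples
w = np.zeros(2)                    # critic parameters (tabular values)
w_prev = w.copy()
d = np.zeros(2)                    # STORM recursive-momentum estimate
a, lr = 0.5, 0.1                   # momentum and step size (assumed values)
s = 0

for t in range(5000):
    s_next = rng.choice(2, p=P[s])
    buffer.append((s, s_next))
    # Uniform draw from the buffer decorrelates the critic update
    # from the current occupancy measure.
    bs, bs_next = buffer[rng.integers(len(buffer))]
    # STORM: d_t = g(w_t) + (1 - a) * (d_{t-1} - g(w_{t-1})),
    # with both gradients evaluated on the SAME sampled transition.
    g_new = td_grad(w, bs, bs_next)
    g_old = td_grad(w_prev, bs, bs_next)
    d = g_new + (1 - a) * (d - g_old)
    w_prev = w.copy()
    w -= lr * d
    s = s_next

# Closed-form value of this chain for comparison: V = (I - gamma P)^{-1} r
V_true = np.linalg.solve(np.eye(2) - gamma * P, r)
```

In a full actor-critic the policy (and hence the occupancy measure) changes between updates, which is exactly why the paper pairs STORM with the buffer; here the policy is frozen so the critic simply tracks $V^\pi$.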