🤖 AI Summary
To address the curse of dimensionality arising from non-stationary policies in partially observable Markov decision processes (POMDPs), this paper proposes a natural policy gradient algorithm based on recurrent neural networks (RNNs), unifying policy parameterization and policy evaluation. Theoretically, we establish, for the first time, finite-width and finite-horizon convergence guarantees for this method in a neighborhood of initialization. We derive tight upper bounds: network width scales as $O(1/\varepsilon^2)$ and sample complexity as $O(1/\varepsilon^4)$ to achieve $\varepsilon$-accuracy. Furthermore, we characterize the efficiency of RNN-based learning in short-memory POMDPs and identify fundamental challenges posed by long-term dependencies. Our results provide the first rigorous convergence guarantee and quantifiable performance bounds for RNNs in non-Markovian reinforcement learning, bridging a critical theoretical gap in deep RL for partially observable environments.
📝 Abstract
In this paper, we study a natural policy gradient method based on recurrent neural networks (RNNs) for partially observable Markov decision processes, whereby RNNs are used for both policy parameterization and policy evaluation to address the curse of dimensionality in non-Markovian reinforcement learning. We present finite-time and finite-width analyses for both the critic (recurrent temporal difference learning) and the corresponding recurrent natural policy gradient method in the near-initialization regime. Our analysis demonstrates the efficiency of RNNs for problems with short-term memory, with explicit bounds on the required network widths and sample complexity, and points out the challenges in the case of long-term dependencies.
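To make the setup concrete, the sketch below illustrates the two ingredients the abstract names: an RNN policy that maps an observation history to action probabilities, and a natural-gradient-style update preconditioned by an empirical Fisher matrix. All dimensions, the Elman-style recurrence, and the damping constant are illustrative assumptions, not the paper's construction; the update is shown only for the output layer to keep the example minimal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper)
obs_dim, hid_dim, n_actions = 3, 8, 2

# RNN policy parameters (Elman-style recurrence, small near-zero init)
W_in = rng.normal(scale=0.1, size=(hid_dim, obs_dim))
W_h = rng.normal(scale=0.1, size=(hid_dim, hid_dim))
W_out = rng.normal(scale=0.1, size=(n_actions, hid_dim))

def policy_probs(obs_history):
    """Roll the RNN over the observation history; softmax over actions."""
    h = np.zeros(hid_dim)
    for o in obs_history:
        h = np.tanh(W_in @ o + W_h @ h)
    logits = W_out @ h
    e = np.exp(logits - logits.max())
    return e / e.sum(), h

def npg_step_W_out(histories, actions, advantages, damping=1e-2):
    """One natural-gradient-style direction for W_out: precondition the
    advantage-weighted policy gradient by a damped empirical Fisher matrix
    estimated from sampled score vectors (purely illustrative)."""
    grads, scores = np.zeros_like(W_out), []
    for hist, a, adv in zip(histories, actions, advantages):
        p, h = policy_probs(hist)
        # d log pi(a|history) / d W_out: rows are (1[k=a] - p[k]) * h
        score = -np.outer(p, h)
        score[a] += h
        scores.append(score.ravel())
        grads += adv * score
    g = grads.ravel() / len(histories)
    S = np.stack(scores)
    F = S.T @ S / len(histories) + damping * np.eye(S.shape[1])
    return np.linalg.solve(F, g).reshape(W_out.shape)
```

In an actual algorithm the advantages would come from the recurrent TD critic evaluating the same observation histories, and the update would cover all RNN parameters; here fixed advantages stand in for that critic.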