🤖 AI Summary
This work addresses the lack of theoretical foundations for asymmetric actor-critic algorithms in partially observable Markov decision processes (POMDPs). The authors analyze a learning paradigm in which the actor and critic employ distinct state representations, with the critic conditioning on the true state available at training time, under linear function approximation. Adapting a finite-time convergence analysis to this asymmetric setting, they derive finite-time error bounds characterizing both estimation accuracy and convergence rate. The bounds show that the asymmetric critic eliminates an error term induced by aliasing in the agent state, an error that symmetric architectures necessarily incur, giving a first rigorous theoretical justification for asymmetric actor-critic methods in POMDPs.
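To make the architectural asymmetry concrete, here is a minimal sketch of one-step asymmetric actor-critic with linear function approximation on a toy POMDP. It assumes one-hot features, a softmax actor, and TD(0) critic updates; the feature maps, step sizes, and toy dynamics are illustrative assumptions, not the paper's notation. The critic is trained on features of the true state, while the actor only sees the aliased observation.

```python
# Minimal sketch of asymmetric actor-critic with linear function
# approximation. All names (phi, psi, alpha_c, alpha_a) and the toy
# POMDP are illustrative assumptions, not the paper's notation.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_obs, n_actions = 2, 1, 2  # two true states aliased into one observation

def phi(s):
    """Critic features of the TRUE state (asymmetric: privileged training info)."""
    f = np.zeros(n_states)
    f[s] = 1.0
    return f

def psi(o):
    """Actor features of the agent state (here, just the observation)."""
    f = np.zeros(n_obs)
    f[o] = 1.0
    return f

w = np.zeros(n_states)                # critic weights: V_w(s) = w @ phi(s)
theta = np.zeros((n_actions, n_obs))  # actor weights: softmax policy over psi(o)
gamma, alpha_c, alpha_a = 0.9, 0.1, 0.01

def policy(o):
    logits = theta @ psi(o)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def step(s, a):
    """Toy dynamics: action 0 keeps the state, action 1 flips it.
    Being in state 0 yields reward +1, state 1 yields -1; both states
    emit the same observation 0 (aliasing)."""
    s_next = s if a == 0 else 1 - s
    r = 1.0 if s_next == 0 else -1.0
    return s_next, r, 0

s, o = 0, 0
for t in range(5000):
    p = policy(o)
    a = rng.choice(n_actions, p=p)
    s_next, r, o_next = step(s, a)

    # Asymmetric critic: TD(0) on true-state features, so the values of
    # the two aliased states are estimated separately.
    delta = r + gamma * w @ phi(s_next) - w @ phi(s)
    w += alpha_c * delta * phi(s)

    # Actor: policy gradient on the agent state, driven by the critic's
    # TD error; grad of log softmax is (one_hot(a) - p) outer psi(o).
    grad_log = -np.outer(p, psi(o))
    grad_log[a] += psi(o)
    theta += alpha_a * delta * grad_log

    s, o = s_next, o_next

print("critic values per true state:", w)      # distinct values despite aliasing
print("policy at the aliased observation:", policy(0))
```

The only difference from a symmetric implementation is the argument of the critic's feature map: replacing `phi(s)` with observation features would force the critic to average the values of the aliased states, which is the source of the error term analyzed in the paper.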
📝 Abstract
In reinforcement learning for partially observable environments, many successful algorithms were developed within the asymmetric learning paradigm. This paradigm leverages additional state information available at training time for faster learning. Although the proposed learning objectives are usually theoretically sound, these methods still lack a theoretical justification for their potential benefits. We propose such a justification for asymmetric actor-critic algorithms with linear function approximators by adapting a finite-time convergence analysis to this setting. The resulting finite-time bound reveals that the asymmetric critic eliminates an error term arising from aliasing in the agent state.
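The eliminated error term can be illustrated with a small numeric example, using assumed values and visitation weights rather than anything from the paper: two true states with opposite values are emitted under a single observation, so the best linear critic on observation features can only represent their visitation-weighted average, leaving an irreducible weighted error, whereas a critic on true-state features represents both values exactly.

```python
# Small numeric illustration (assumed values, not from the paper) of the
# aliasing error term. Two aliased true states share one observation
# feature, so a symmetric linear critic can only fit their
# visitation-weighted average value; an asymmetric critic fits both.
import numpy as np

V_true = np.array([1.0, -1.0])  # assumed true values of the two aliased states
d = np.array([0.5, 0.5])        # assumed stationary visitation weights

def weighted_fit(Phi):
    """Best linear fit of V_true under d-weighted least squares."""
    w, *_ = np.linalg.lstsq(Phi * np.sqrt(d)[:, None],
                            V_true * np.sqrt(d), rcond=None)
    err = d @ (Phi @ w - V_true) ** 2
    return w, err

# Symmetric critic: one feature shared by both aliased states.
w_sym, err_sym = weighted_fit(np.ones((2, 1)))

# Asymmetric critic: one-hot true-state features, exact representation.
w_asym, err_asym = weighted_fit(np.eye(2))

print(f"symmetric critic value: {w_sym[0]:.2f}, weighted error: {err_sym:.2f}")
print(f"asymmetric critic weighted error: {err_asym:.2e}")  # ~0: aliasing term gone
```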