🤖 AI Summary
Off-policy actor-critic methods suffer from two key challenges: (i) unstable value estimation due to the “deadly triad” and the continually changing target policy being evaluated (the “moving target” problem), and (ii) slow convergence caused by biased off-policy policy gradient estimates. This paper proposes a functional critic modeling framework that unifies value evaluation and policy gradient computation within a shared function space, enabling what the authors describe as the first provably convergent off-policy target-based actor-critic algorithm. Leveraging linear function approximation theory, the authors design a neural network architecture capable of exact off-policy policy gradient estimation. They establish convergence guarantees under linear function approximation and validate the approach in preliminary experiments on the DeepMind Control Suite, showing improved sample efficiency and training stability over strong baselines.
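The summary does not specify the functional critic architecture. Purely as an illustration of the general idea of a critic defined jointly over states and a policy representation, so that value evaluation and policy-gradient computation share one function space, here is a minimal, hypothetical PyTorch sketch. The class name `FunctionalCritic`, the flattened-parameter policy embedding, and all layer sizes are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's actual architecture): one way to realize
# a "functional critic" is to make the critic an explicit function of a policy
# representation, so that the value estimate is differentiable with respect to
# the policy parameters. All names and design choices here are assumptions.

class FunctionalCritic(nn.Module):
    """V(s; pi): a value network conditioned on an embedding of the policy."""

    def __init__(self, state_dim: int, policy_embed_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor, policy_embed: torch.Tensor) -> torch.Tensor:
        # Because the critic depends on the policy embedding, gradients of the
        # value with respect to the policy parameters flow through this input.
        return self.net(torch.cat([state, policy_embed], dim=-1))


# Toy usage: differentiate the critic's value estimate with respect to the
# actor's parameters *through* the policy embedding.
actor = nn.Linear(8, 4)  # stand-in policy network: 8-d states, 4-d actions
embed_dim = sum(p.numel() for p in actor.parameters())
critic = FunctionalCritic(state_dim=8, policy_embed_dim=embed_dim)

states = torch.randn(32, 8)
# Crude policy embedding: the flattened actor parameters (in practice a learned
# policy encoder would replace this).
policy_embed = torch.cat([p.reshape(-1) for p in actor.parameters()]).expand(32, -1)

value = critic(states, policy_embed).mean()
value.backward()  # actor.weight.grad / actor.bias.grad now hold a gradient estimate
```

The only point of the sketch is that a critic conditioned on the policy makes the gradient of the value with respect to the policy parameters available by ordinary backpropagation, which is the property the summary attributes to exact off-policy policy gradient estimation.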
📝 Abstract
Off-policy reinforcement learning (RL) with function approximation offers an effective way to improve sample efficiency by reusing past experience. Within this setting, the actor-critic (AC) framework has achieved strong empirical success. However, both critic and actor learning are challenging in off-policy AC methods. First, in addition to the classic "deadly triad" instability of off-policy evaluation, the critic suffers from a "moving target" problem: the policy being evaluated changes continually, so the problem essentially reduces to repeatedly performing off-policy evaluation for a sequence of changing policies. Second, actor learning becomes less efficient due to the difficulty of estimating the exact off-policy policy gradient: the off-policy policy gradient theorem requires a complex and often impractical algorithm to estimate an additional emphasis critic, which is typically neglected in practice, thereby reducing the update to the on-policy policy gradient as an approximation. In this work, we introduce the novel concept of functional critic modeling, which leads to a new AC framework that addresses both challenges for actor-critic learning under the deadly triad setting. We provide a theoretical analysis in the linear function approximation setting, establishing the provable convergence of our framework, which, to the best of our knowledge, is the first convergent off-policy target-based AC algorithm. From a practical perspective, we further propose a carefully designed neural network architecture for functional critic modeling and demonstrate its effectiveness through preliminary experiments on widely used RL tasks from the DeepMind Control Suite benchmark.
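As standard background for the emphasis-critic remark above (this is prior work, not the paper's contribution): for the excursion objective $J_\mu(\theta) = \sum_s d_\mu(s)\, v_{\pi_\theta}(s)$, where $d_\mu$ is the stationary state distribution of the behavior policy $\mu$, the off-policy policy gradient theorem of Imani et al. (2018) can be written as

$$
\nabla_\theta J_\mu(\theta) \;=\; \sum_s m(s) \sum_a \nabla_\theta \pi_\theta(a\mid s)\, q_{\pi_\theta}(s,a),
\qquad
m(s') \;=\; d_\mu(s') + \gamma \sum_{s,a} m(s)\, \pi_\theta(a\mid s)\, P(s'\mid s,a).
$$

The emphatic weighting $m$ requires its own estimator, the "emphasis critic". Dropping it and substituting $d_\mu$ for $m$ yields the commonly used Off-PAC semi-gradient of Degris et al. (2012),

$$
\nabla_\theta J_\mu(\theta) \;\approx\; \sum_s d_\mu(s) \sum_a \nabla_\theta \pi_\theta(a\mid s)\, q_{\pi_\theta}(s,a),
$$

which is the biased on-policy-style approximation the abstract refers to.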