🤖 AI Summary
Deterministic Policy Gradient (DPG) algorithms rely on precise gradient estimates of the action-value function provided by the critic; however, under function approximation, such gradients are prone to bias and estimation error, leading to unstable policy updates. To address this, we propose Zeroth-Order Deterministic Policy Gradient (ZODPG), the first method to introduce two-point stochastic gradient estimation directly in action space—thereby eliminating explicit differentiation of the action-value function and theoretically ensuring compatibility of policy updates. By integrating zeroth-order optimization into the actor-critic framework, ZODPG avoids gradient approximation errors inherent in first-order DPG variants. Empirical evaluation across multiple continuous-control benchmark tasks demonstrates that ZODPG significantly reduces gradient estimation error, achieves more robust convergence, and outperforms existing state-of-the-art methods in both sample efficiency and final performance.
📝 Abstract
Deterministic policy gradient algorithms are foundational to actor-critic methods for continuous control, yet they often suffer inaccuracies due to their dependence on the derivative of the critic's value estimates with respect to input actions. This reliance demands precise action-value gradient computations, which is challenging under function approximation. We introduce an actor-critic algorithm that bypasses the need for such precision by employing a zeroth-order approximation of the action-value gradient through two-point stochastic gradient estimation within the action space. This approach provably and effectively addresses compatibility issues inherent in deterministic policy gradient schemes. Empirical results further demonstrate that our algorithm not only matches but frequently exceeds the performance of current state-of-the-art methods, often by a substantial margin.
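To make the core idea concrete, here is a minimal sketch of a standard two-point zeroth-order gradient estimator applied in action space. It estimates the gradient of a critic `q_fn` with respect to the action using only two value queries, with no differentiation of the critic. The function names, interface, and perturbation scale `delta` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def two_point_action_gradient(q_fn, state, action, delta=1e-3, rng=None):
    """Two-point zeroth-order estimate of the action-value gradient
    grad_a Q(s, a), using only evaluations of q_fn (no differentiation).

    This is a generic sketch of the two-point estimator; the paper's
    actual sampling distribution and scaling may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = action.shape[0]
    # Sample a random direction uniformly on the unit sphere.
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)
    # Query the critic at two symmetrically perturbed actions.
    q_plus = q_fn(state, action + delta * u)
    q_minus = q_fn(state, action - delta * u)
    # Finite-difference estimate along u, rescaled by the dimension d
    # so that its expectation matches the (smoothed) gradient.
    return d * (q_plus - q_minus) / (2.0 * delta) * u
```

Averaged over many sampled directions, this estimate converges to the gradient of a smoothed version of Q; for a quadratic critic it is exactly unbiased, which is the property that sidesteps explicit critic differentiation in the actor update.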