🤖 AI Summary
This work addresses the computational inefficiency of Policy Dual Averaging (PDA) in continuous control, where each iteration requires solving an expensive optimization subproblem, hindering practical deployment. To overcome this limitation, we propose the first integration of a learnable policy network into the PDA framework, using function approximation to solve the subproblem efficiently and substantially accelerate decision-making. Theoretical analysis demonstrates that the proposed method preserves convergence guarantees under bounded approximation error. Empirical evaluations on benchmark tasks in robotic control and operations research show that our approach consistently outperforms mainstream on-policy algorithms such as Proximal Policy Optimization (PPO), effectively bridging the gap between PDA's theoretical advantages and real-world applicability.
📝 Abstract
Policy Dual Averaging (PDA) offers a principled Policy Mirror Descent (PMD) framework that more naturally admits value function approximation than standard PMD, enabling the use of approximate advantage (or Q-) functions while retaining strong convergence guarantees. However, applying PDA in continuous state and action spaces remains computationally challenging, since action selection involves solving an optimization subproblem at each decision step. In this paper, we propose *actor-accelerated PDA*, which uses a learned policy network to approximate the solutions of these subproblems, yielding faster runtimes while maintaining convergence guarantees. We provide a theoretical analysis that quantifies how actor approximation error affects the convergence of PDA under suitable assumptions. We then evaluate its performance on several benchmarks in robotics, control, and operations research. Actor-accelerated PDA outperforms popular on-policy baselines such as Proximal Policy Optimization (PPO). Overall, our results bridge the gap between the theoretical advantages of PDA and its practical deployment in continuous-action problems with function approximation.
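To illustrate the bottleneck the abstract describes and the amortization idea behind it, here is a minimal, self-contained sketch. It is not the paper's algorithm: the dual objective is a toy concave quadratic standing in for PDA's accumulated Q-estimates plus regularizer, the "policy network" is a one-parameter linear map, and all names (`dual_objective`, `exact_action`, `fit_actor`) are illustrative assumptions. The point it shows is structural: vanilla PDA runs an inner optimization per decision step, while a fitted actor replaces that loop with a single forward pass.

```python
import numpy as np

def dual_objective(a, s, q_weights, reg=0.5):
    # Toy stand-in for PDA's per-state dual objective: a weighted sum of
    # concave quadratic "Q-estimates" minus a quadratic regularizer.
    # q_weights is a list of (weight, slope) pairs; all values are made up.
    val = sum(w * (-(a - c * s) ** 2) for w, c in q_weights)
    return val - reg * a ** 2

def exact_action(s, q_weights, steps=200, lr=0.05):
    # Vanilla PDA path: solve the sub-problem by gradient ascent at every
    # decision step (the expensive part in continuous action spaces).
    a = 0.0
    for _ in range(steps):
        eps = 1e-5
        g = (dual_objective(a + eps, s, q_weights)
             - dual_objective(a - eps, s, q_weights)) / (2 * eps)
        a += lr * g
    return a

def fit_actor(states, q_weights):
    # Actor-accelerated path: fit a tiny linear "policy network" a ~= theta*s
    # to sub-problem solutions, then reuse it for fast action selection.
    states = np.asarray(states, dtype=float)
    targets = np.array([exact_action(s, q_weights) for s in states])
    theta = (states @ targets) / (states @ states)  # 1-D least squares
    return theta

q_weights = [(1.0, 0.8), (0.5, 1.2)]     # (weight, slope) pairs, illustrative
train_states = np.linspace(-2, 2, 9)
theta = fit_actor(train_states, q_weights)

a_fast = theta * 1.5                      # one multiply per decision
a_slow = exact_action(1.5, q_weights)     # 200 inner gradient steps
```

For this toy objective the sub-problem optimum happens to be linear in the state, so the linear actor recovers it almost exactly; with a neural-network actor and a learned Q-function, the match is only approximate, which is why the abstract's analysis tracks how actor approximation error propagates into PDA's convergence bound.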