Actor-Accelerated Policy Dual Averaging for Reinforcement Learning in Continuous Action Spaces

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational inefficiency of Policy Dual Averaging (PDA) in continuous control, where each iteration requires solving an expensive optimization subproblem, hindering practical deployment. To overcome this limitation, we propose the first integration of a learnable policy network into the PDA framework, leveraging function approximation to efficiently solve the subproblem and substantially accelerate decision-making. Theoretical analysis demonstrates that the proposed method preserves convergence guarantees under bounded approximation error. Empirical evaluations on robotic control and operations research benchmark tasks show that our approach consistently outperforms mainstream on-policy algorithms such as Proximal Policy Optimization (PPO), effectively bridging the gap between PDA’s theoretical advantages and real-world applicability.

📝 Abstract
Policy Dual Averaging (PDA) offers a principled Policy Mirror Descent (PMD) framework that more naturally admits value function approximation than standard PMD, enabling the use of approximate advantage (or Q-) functions while retaining strong convergence guarantees. However, applying PDA in continuous state and action spaces remains computationally challenging, since action selection involves solving an optimization sub-problem at each decision step. In this paper, we propose *actor-accelerated PDA*, which uses a learned policy network to approximate the solution of the optimization sub-problems, yielding faster runtimes while maintaining convergence guarantees. We provide a theoretical analysis that quantifies how actor approximation error impacts the convergence of PDA under suitable assumptions. We then evaluate its performance on several benchmarks in robotics, control, and operations research. Actor-accelerated PDA achieves superior performance compared to popular on-policy baselines such as Proximal Policy Optimization (PPO). Overall, our results bridge the gap between the theoretical advantages of PDA and its practical deployment in continuous-action problems with function approximation.
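The core idea from the abstract, an actor network amortizing the per-step PDA sub-problem, can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's actual parameterization: the Q-function is modeled as linear in the action, the averaging weights are uniform, the regularizer is squared-Euclidean (so the sub-problem has a closed form), and the "actor" is a linear map fit by least squares rather than a neural network.

```python
import numpy as np

def pda_subproblem_action(q_grads, reg=1.0):
    """Closed-form solution of an illustrative PDA-style sub-problem:
        argmax_a  mean_k <g_k, a> - (reg/2) * ||a||^2
    where g_k are per-iteration Q-gradients and the mean implements
    the dual averaging.  Maximizer: a* = g_bar / reg."""
    g_bar = np.mean(q_grads, axis=0)  # dual-averaged gradient
    return g_bar / reg

# --- Amortize the sub-problem with a linear "actor" (assumed form) ---
rng = np.random.default_rng(0)
states = rng.normal(size=(128, 4))            # 128 sampled 4-dim states
A = rng.normal(size=(2, 4))                   # assumed state->Q-gradient map
# exact sub-problem solutions serve as regression targets for the actor
targets = np.stack([pda_subproblem_action([A @ s], reg=2.0) for s in states])
# least-squares fit: actor(s) = W^T s, standing in for policy-network training
W, *_ = np.linalg.lstsq(states, targets, rcond=None)
actor_actions = states @ W                    # one matrix multiply per state
err = np.max(np.abs(actor_actions - targets))
```

At deployment, the actor replaces the per-state optimization with a single forward pass; the paper's analysis is about how the residual `err` of such an approximation propagates into PDA's convergence bound.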
Problem

Research questions and friction points this paper is trying to address.

Policy Dual Averaging
continuous action spaces
optimization sub-problem
computational challenge
reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Policy Dual Averaging
actor-accelerated
continuous action spaces
function approximation
convergence guarantees
Ji Gao
PhD, Meta Platforms Inc
Security, Privacy, Machine Learning, Deep Learning

Caleb Ju
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, GA, USA

Guanghui Lan
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, GA, USA

Zhaohui Tong
Associate Professor, Georgia Institute of Technology
Sustainable Materials, Green Chemistry, Waste Valorization, Process Control, Lignocellulose