Action Robust Reinforcement Learning via Optimal Adversary Aware Policy Optimization

📅 2025-07-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the insufficient robustness of reinforcement learning (RL) policies under action perturbations, this paper proposes Optimal Adversary-aware Policy Iteration (OA-PI). OA-PI explicitly models the optimal adversarial perturbation policy and integrates robustness optimization directly into the policy iteration process, maintaining compatibility with mainstream algorithms, including TD3 and PPO, without architectural modifications. Theoretically, OA-PI establishes convergence guarantees through a coupled analysis of adversarial optimization and policy iteration. Empirically, it demonstrates significant robustness improvements (average +32.7%) across multiple continuous-control benchmarks under diverse action perturbations, while preserving original task performance (degradation <1.5%) and sample efficiency (training-step increase <5%). Its core contribution is the first unified formulation of differentiable optimal adversary modeling within policy iteration, enabling efficient, plug-and-play robust RL.
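The core idea of evaluating a policy against the optimal action adversary can be illustrated with a minimal sketch. Here the paper's learned, differentiable adversary is replaced by a simple random search inside an L-infinity ball; `q_fn`, `eps`, and the sampling budget are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def worst_case_action(q_fn, state, action, eps=0.1, n_samples=256, rng=None):
    """Approximate the optimal action adversary: find the perturbation
    within an L-inf ball of radius eps that minimizes Q(state, action + delta).

    Illustrative random-search stand-in for OA-PI's learned adversary.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Sample candidate perturbations uniformly from the eps-ball.
    deltas = rng.uniform(-eps, eps, size=(n_samples, action.shape[-1]))
    perturbed = action + deltas
    # The adversary picks the perturbed action with the lowest Q-value.
    q_vals = np.array([q_fn(state, a) for a in perturbed])
    return perturbed[np.argmin(q_vals)]
```

Evaluating and improving the policy against this worst-case action, rather than the nominal one, is what ties robustness directly into the policy iteration loop; in OA-PI proper the adversary is itself a learned policy, which is what enables the plug-and-play integration with TD3 and PPO described above.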

📝 Abstract
Reinforcement Learning (RL) has achieved remarkable success in sequential decision tasks. However, recent studies have revealed the vulnerability of RL policies to different perturbations, raising concerns about their effectiveness and safety in real-world applications. In this work, we focus on the robustness of RL policies against action perturbations and introduce a novel framework called Optimal Adversary-aware Policy Iteration (OA-PI). Our framework enhances action robustness under various perturbations by evaluating and improving policy performance against the corresponding optimal adversaries. Moreover, our approach can be integrated into mainstream DRL algorithms such as Twin Delayed DDPG (TD3) and Proximal Policy Optimization (PPO), improving action robustness effectively while maintaining nominal performance and sample efficiency. Experimental results across various environments demonstrate that our method effectively enhances the robustness of DRL policies against different action adversaries.
Problem

Research questions and friction points this paper is trying to address.

Enhance RL policy robustness against action perturbations
Integrate robustness into mainstream DRL algorithms effectively
Maintain nominal performance while improving action robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Adversary-aware Policy Iteration framework
Integrates with TD3 and PPO algorithms
Enhances robustness against action perturbations