AI Summary
This work addresses the instability of reinforcement learning agents under environmental disturbances and model uncertainty by proposing a Minimax Deep Deterministic Policy Gradient (MMDDPG) framework. The approach formulates policy learning as a two-player zero-sum game between the agent's policy and an adversarial perturbation policy, optimizing them in a minimax fashion. To prevent overly aggressive perturbations that could hinder learning, a fractional objective function is introduced to balance task performance against perturbation magnitude. This mechanism enhances training stability and improves policy robustness. Evaluated on continuous control tasks in MuJoCo, MMDDPG significantly outperforms baseline methods and demonstrates strong resilience to both dynamic disturbances and parametric variations.
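The alternating minimax training described above can be illustrated with a scalar toy problem. This is a sketch, not the paper's algorithm: the quadratic cost, the exact fractional form, the coefficients `LAM` and `LR`, and the inner-loop update schedule are all illustrative assumptions. A scalar "user" variable `a` stands in for the agent's policy and a scalar "adversary" variable `w` for the perturbation policy.

```python
# Toy sketch of minimax training with a fractional objective (illustrative
# only; the cost, LAM, LR, and update schedule are assumptions, not the
# paper's algorithm). The user a minimizes J by gradient descent; the
# adversary w maximizes it by gradient ascent. Placing the disturbance
# magnitude in the denominator makes overly aggressive perturbations
# unprofitable for the adversary.

LAM = 1.0  # weight on disturbance magnitude (assumed)
LR = 0.1   # gradient step size for both players (assumed)

def objective(a, w):
    """Fractional objective: task cost over (1 + weighted disturbance size)."""
    cost = (a + w) ** 2  # the disturbance w shifts the user's outcome
    return cost / (1.0 + LAM * w ** 2)

def grads(a, w):
    """Analytic gradients of the objective for each player."""
    s = a + w
    denom = 1.0 + LAM * w ** 2
    da = 2.0 * s / denom
    dw = (2.0 * s * denom - 2.0 * LAM * w * s ** 2) / denom ** 2
    return da, dw

a, w = 0.0, 1.0  # user starts neutral; adversary starts disturbing
for _ in range(1000):
    for _ in range(5):       # a few user steps per adversary step
        da, _ = grads(a, w)
        a -= LR * da         # user: gradient descent (minimize J)
    _, dw = grads(a, w)
    w += LR * dw             # adversary: gradient ascent (maximize J)

print(objective(a, w))       # residual objective: effectively 0
```

After training, the user has learned to cancel the disturbance (`a ≈ -w`), while the fractional denominator keeps the adversary from inflating `w` without bound, which is the stabilizing effect the summary attributes to the fractional objective.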
Abstract
Reinforcement learning (RL) has achieved remarkable success in a wide range of control and decision-making tasks. However, RL agents often exhibit unstable or degraded performance when deployed in environments subject to unexpected external disturbances and model uncertainties. Consequently, ensuring reliable performance under such conditions remains a critical challenge. In this paper, we propose minimax deep deterministic policy gradient (MMDDPG), a framework for learning disturbance-resilient policies in continuous control tasks. The training process is formulated as a minimax optimization problem between a user policy and an adversarial disturbance policy: the user learns a robust policy that minimizes the objective function, while the adversary generates disturbances that maximize it. To stabilize this interaction, we introduce a fractional objective that balances task performance against disturbance magnitude, preventing excessively aggressive disturbances and promoting robust learning. Experimental evaluations in MuJoCo environments demonstrate that MMDDPG achieves significantly improved robustness against both external force perturbations and model parameter variations.
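The minimax formulation can be sketched in equations. Since the abstract does not give the exact form of the fractional objective, the version below (expected task cost normalized by expected disturbance magnitude, with a balance coefficient \(\lambda\)) is an illustrative assumption; \(\pi\) denotes the user policy, \(\nu\) the adversarial disturbance policy, and \(d_t\) the disturbance applied at step \(t\).

```latex
% Two-player zero-sum formulation: the user policy \pi is trained to
% minimize the objective J while the adversary \nu maximizes it.
\pi^{*} = \arg\min_{\pi}\,\max_{\nu}\; J(\pi, \nu)

% One plausible fractional objective (assumed form): accumulated task
% cost is normalized by accumulated disturbance magnitude, so ever
% larger disturbances d_t yield diminishing payoff for the adversary.
J(\pi, \nu) =
  \frac{\mathbb{E}\!\left[\sum_{t} \gamma^{t}\, c(s_t, a_t, d_t)\right]}
       {1 + \lambda\, \mathbb{E}\!\left[\sum_{t} \gamma^{t}\, \lVert d_t \rVert^{2}\right]},
\qquad a_t = \pi(s_t),\quad d_t = \nu(s_t)
```

Under a form like this, the adversary gains from disturbances only insofar as they raise task cost faster than they grow in magnitude, which matches the abstract's claim that the fractional objective prevents excessively aggressive disturbances.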