Taming the Adversary: Stable Minimax Deep Deterministic Policy Gradient via Fractional Objectives

📅 2026-03-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the instability of reinforcement learning agents under environmental disturbances and model uncertainty by proposing a Minimax Deep Deterministic Policy Gradient (MMDDPG) framework. The approach formulates policy learning as a two-player zero-sum game between the agent’s policy and an adversarial perturbation policy, optimizing them in a minimax fashion. To prevent overly aggressive perturbations that could hinder learning, a fractional objective function is introduced to balance task performance against perturbation magnitude. This mechanism enhances training stability and improves policy robustness. Evaluated on continuous control tasks in MuJoCo, MMDDPG significantly outperforms baseline methods and demonstrates strong resilience to both dynamic disturbances and parametric variations.

πŸ“ Abstract
Reinforcement learning (RL) has achieved remarkable success in a wide range of control and decision-making tasks. However, RL agents often exhibit unstable or degraded performance when deployed in environments subject to unexpected external disturbances and model uncertainties. Consequently, ensuring reliable performance under such conditions remains a critical challenge. In this paper, we propose minimax deep deterministic policy gradient (MMDDPG), a framework for learning disturbance-resilient policies in continuous control tasks. The training process is formulated as a minimax optimization problem between a user policy and an adversarial disturbance policy. In this problem, the user learns a robust policy that minimizes the objective function, while the adversary generates disturbances that maximize it. To stabilize this interaction, we introduce a fractional objective that balances task performance and disturbance magnitude. This objective prevents excessively aggressive disturbances and promotes robust learning. Experimental evaluations in MuJoCo environments demonstrate that the proposed MMDDPG achieves significantly improved robustness against both external force perturbations and model parameter variations.
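The core idea of the abstract, a minimax game in which a user policy minimizes a cost while an adversary maximizes a fractional objective that penalizes large disturbances, can be illustrated with a toy numerical sketch. The paper's exact objective is not reproduced here; the ratio `cost / (eps + ‖d‖²)`, the quadratic cost, and all function names below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def cost(u, d):
    # Hypothetical task cost: distance of the disturbed action from the
    # origin (the "goal"), plus a small control penalty on the user.
    return np.sum((u + d) ** 2) + 0.1 * np.sum(u ** 2)

def fractional_objective(u, d, eps=1.0):
    # Assumed fractional form: cost normalized by disturbance magnitude,
    # so the adversary cannot profit from arbitrarily large perturbations.
    return cost(u, d) / (eps + np.sum(d ** 2))

def numerical_grad(f, x, h=1e-5):
    # Central-difference gradient, to keep the sketch dependency-free.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

rng = np.random.default_rng(0)
u = rng.normal(size=2)  # user "policy" (toy: the action itself)
d = rng.normal(size=2)  # adversarial disturbance

lr = 0.05
for _ in range(500):
    # User descends the raw task cost; adversary ascends the
    # fractional objective, one gradient step each per round.
    u -= lr * numerical_grad(lambda v: cost(v, d), u)
    d += lr * numerical_grad(lambda w: fractional_objective(u, w), d)
    d = np.clip(d, -1.0, 1.0)  # bounded disturbance set

print(cost(u, d))
```

The denominator term is what keeps the interaction stable in this sketch: without it, the adversary's best response is an unboundedly large disturbance, which is exactly the failure mode the fractional objective is designed to prevent.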
Problem

Research questions and friction points this paper is trying to address.

robustness
adversarial disturbance
model uncertainty
reinforcement learning
minimax optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

minimax optimization
fractional objective
robust reinforcement learning
adversarial disturbance
deep deterministic policy gradient