Dynamic Obstacle Avoidance with Bounded Rationality Adversarial Reinforcement Learning

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address robust navigation for quadrupedal robots in unknown environments with dynamic obstacles, this paper proposes a hierarchical adversarial reinforcement learning framework. Methodologically, it models dynamic obstacles as bounded-rational adversarial agents constrained by Quantal Response Equilibria (QRE) to ensure behavioral plausibility, and it introduces a progressive rationality curriculum that incrementally raises the obstacle agents' rationality during training to improve policy generalization and training stability, yielding a novel rationality-constrained adversarial RL approach for legged locomotion. Evaluated in randomized multi-obstacle mazes, the method achieves robust zero-shot navigation, improving task success rate by 37% over baseline methods, and is demonstrated on a Unitree GO1 robot in simulation, indicating applicability to real scenarios. The framework thus couples theoretical rigor in adversarial modeling with practical robustness for dynamic navigation.
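The bounded-rationality idea in the summary can be made concrete with the standard logit form of quantal response: the adversary samples actions from a softmax over its payoffs, scaled by a rationality parameter λ. This is a minimal sketch, not the paper's implementation; the Q-values and λ settings below are illustrative.

```python
import numpy as np

def quantal_response_policy(q_values, rationality):
    """Logit quantal response: action probabilities are a softmax of
    payoffs scaled by a rationality parameter lambda.
    lambda -> 0 gives uniform random play; lambda -> inf approaches
    the fully rational best response."""
    z = rationality * np.asarray(q_values, dtype=float)
    z -= z.max()  # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Low rationality: the adversarial obstacle behaves near-uniformly.
p_low = quantal_response_policy([1.0, 2.0, 0.5], rationality=0.1)
# High rationality: probability mass concentrates on the best action.
p_high = quantal_response_policy([1.0, 2.0, 0.5], rationality=10.0)
```

Bounding λ keeps the adversary's behavior plausible (it cannot play an arbitrarily sharp best response), which is what stabilizes adversarial training here.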

📝 Abstract
Reinforcement Learning (RL) has proven largely effective in obtaining stable locomotion gaits for legged robots. However, designing control algorithms which can robustly navigate unseen environments with obstacles remains an ongoing problem within quadruped locomotion. To tackle this, it is convenient to solve navigation tasks by means of a hierarchical approach with a low-level locomotion policy and a high-level navigation policy. Crucially, the high-level policy needs to be robust to dynamic obstacles along the path of the agent. In this work, we propose a novel way to endow navigation policies with robustness by a training process that models obstacles as adversarial agents, following the adversarial RL paradigm. Importantly, to improve the reliability of the training process, we bound the rationality of the adversarial agent resorting to quantal response equilibria, and place a curriculum over its rationality. We called this method Hierarchical policies via Quantal response Adversarial Reinforcement Learning (Hi-QARL). We demonstrate the robustness of our method by benchmarking it in unseen randomized mazes with multiple obstacles. To prove its applicability in real scenarios, our method is applied on a Unitree GO1 robot in simulation.
Problem

Research questions and friction points this paper is trying to address.

Robust navigation in dynamic environments with obstacles
Hierarchical control for legged robot locomotion
Adversarial training with bounded rationality for reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical policies with low and high-level control
Adversarial RL with bounded rationality obstacles
Quantal response equilibria for reliable training
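The rationality curriculum listed above can be sketched as a schedule that grows the adversary's rationality λ over training. The linear schedule and the bounds `lam_min`/`lam_max` below are hypothetical placeholders; the paper does not specify its schedule here.

```python
def rationality_schedule(step, total_steps, lam_min=0.1, lam_max=10.0):
    """Progressive rationality curriculum: the adversary's rationality
    grows from lam_min (near-random obstacles) to lam_max
    (near-best-response obstacles) as training proceeds."""
    frac = min(max(step / total_steps, 0.0), 1.0)  # clamp to [0, 1]
    return lam_min + frac * (lam_max - lam_min)
```

Starting the adversarial obstacles near-random and sharpening them gradually is what lets the navigation policy improve before facing strong opposition.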
Jose-Luis Holgado-Alvarez
Center for Artificial Intelligence and Data Science, University of Würzburg, Germany
Aryaman Reddi
Technische Universität Darmstadt, Germany; Hessian.ai, Germany
Carlo D'Eramo
Professor of Reinforcement Learning @ University of Würzburg | Group leader @ TU Darmstadt
Reinforcement Learning · Deep Learning · Multi-Task Learning · Transfer Learning · Multi-Agent