🤖 AI Summary
This work addresses the challenge of enabling quadrupedal robots to achieve safe, agile navigation in densely cluttered environments while maintaining high training efficiency. To this end, the authors propose the SEA-Nav framework, which integrates differentiable control barrier function (CBF)-based safety constraints, an adaptive collision replay mechanism, risk-aware exploration rewards, and kinematic action constraints within a reinforcement learning paradigm. This unified approach effectively balances safety guarantees with exploration efficiency during policy learning. Notably, SEA-Nav achieves, for the first time on a real quadrupedal robot, successful navigation through highly complex obstacle courses after only minutes of training, substantially improving both sample efficiency and deployment safety compared to prior methods.
📝 Abstract
Efficiently training quadruped robot navigation in densely cluttered environments remains a significant challenge. Existing methods either lack safety and agility beyond simple obstacle distributions or suffer from slow locomotion in complex environments, often requiring excessively long training phases. To this end, we propose SEA-Nav (Safe, Efficient, and Agile Navigation), a reinforcement learning framework for quadruped navigation. Within diverse and dense obstacle environments, a differentiable control barrier function (CBF)-based shield constrains the navigation policy to output safe velocity commands. An adaptive collision replay mechanism and hazardous exploration rewards increase the probability of learning from critical experiences, guiding efficient exploration and exploitation. Finally, kinematic action constraints ensure feasible velocity commands, facilitating successful physical deployment. To the best of our knowledge, this is the first approach that achieves highly challenging quadruped navigation in the real world with minute-level training time.
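To make the CBF-based shield idea concrete, here is a minimal sketch of a safety filter that projects a nominal velocity command onto the safe set for a single-integrator robot near one obstacle. This is an illustrative assumption, not the paper's implementation: the barrier `h(x) = ||x - x_obs||^2 - d_safe^2`, the gain `alpha`, and the closed-form projection are all standard CBF-QP ingredients chosen for this example.

```python
import numpy as np

def cbf_safety_filter(u_nom, x, x_obs, d_safe=0.5, alpha=1.0):
    """Project a nominal velocity command onto the CBF-safe half-space.

    Single-integrator sketch: barrier h(x) = ||x - x_obs||^2 - d_safe^2,
    safety condition grad_h(x) . u + alpha * h(x) >= 0. The minimal-change
    safe command solves min ||u - u_nom||^2 subject to a . u + b >= 0,
    which has the closed form used below.
    """
    a = 2.0 * (x - x_obs)                        # gradient of h at x
    b = alpha * (np.dot(x - x_obs, x - x_obs) - d_safe**2)
    slack = a @ u_nom + b
    if slack >= 0.0:                             # nominal command already safe
        return u_nom
    # minimal correction: remove only the unsafe component along the normal
    return u_nom - (slack / (a @ a)) * a

# Example: robot at origin, obstacle 1 m ahead, nominal command drives at it.
x, x_obs = np.zeros(2), np.array([1.0, 0.0])
u_safe = cbf_safety_filter(np.array([1.0, 0.0]), x, x_obs)
```

In this example the filter slows the forward velocity just enough that the CBF condition holds with equality, which is the "shield" behavior the abstract describes: the policy's output is altered only when it would violate safety.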