REASAN: Learning Reactive Safe Navigation for Legged Robots

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Legged robots face significant challenges in achieving lightweight, real-time, reactive safe navigation in complex dynamic environments using only a single onboard LiDAR. Method: the paper proposes a fully onboard, modular end-to-end architecture with a four-module decoupled design: three independent reinforcement-learning policies (locomotion control, safety protection, and navigation decision-making) coordinated with a transformer-based exteroceptive estimator that processes raw point clouds. The design eliminates heuristic rules and policy-switching mechanisms, and training combines reward shaping, curriculum learning, and sim-to-real transfer. Contributions/Results: experiments demonstrate millisecond-level reaction times and high robustness in both single- and multi-robot scenarios under fully onboard operation. Ablation studies confirm the necessity of each module, and the system outperforms state-of-the-art baselines. The implementation is open-sourced.
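The data flow implied by the four-module design can be sketched as follows. This is a hypothetical illustration only: the module names mirror the summary above, but every internal computation here is a placeholder (crude pooling and random linear maps), not the paper's trained networks.

```python
import numpy as np

# Illustrative sketch of a four-module decoupled pipeline: estimator ->
# navigation policy -> safety shield -> locomotion policy. All internals
# are placeholders; only the inter-module data flow follows the summary.

rng = np.random.default_rng(0)

def exteroceptive_estimator(point_cloud):
    """Stand-in for the transformer-based estimator: compresses a raw
    LiDAR point cloud (N x 3) into a fixed-size latent obstacle code."""
    pooled = point_cloud.mean(axis=0)   # crude pooling over points
    return np.tanh(pooled)              # 3-dim latent (illustrative)

def navigation_policy(goal, latent):
    """High-level policy: proposes a planar velocity command
    (vx, vy, yaw_rate) toward the goal, conditioned on obstacles."""
    direction = goal / (np.linalg.norm(goal) + 1e-6)
    slowdown = 1.0 - 0.5 * float(np.abs(latent).mean())
    return np.array([direction[0], direction[1], 0.1]) * slowdown

def safety_policy(cmd, latent):
    """Safety shield: scales down the commanded velocity when the
    latent code indicates nearby obstacles (illustrative rule)."""
    danger = float(np.clip(np.abs(latent).max(), 0.0, 1.0))
    return cmd * (1.0 - 0.8 * danger)

def locomotion_policy(cmd, proprio):
    """Low-level policy: maps the shielded velocity command plus
    proprioception to 12 joint targets (random linear placeholder)."""
    W = rng.standard_normal((12, cmd.size + proprio.size)) * 0.1
    return W @ np.concatenate([cmd, proprio])

# One control step through the pipeline with fake sensor data.
point_cloud = rng.standard_normal((256, 3))  # fake LiDAR scan
proprio = np.zeros(30)                       # fake joint states / IMU
goal = np.array([2.0, 0.5])

latent = exteroceptive_estimator(point_cloud)
cmd = navigation_policy(goal, latent)
safe_cmd = safety_policy(cmd, latent)
joint_targets = locomotion_policy(safe_cmd, proprio)
```

Because the modules are decoupled, each can be trained and swapped independently; the safety shield only ever sees the navigation command and the obstacle latent, never the raw sensor stream.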

📝 Abstract
We present a novel modularized end-to-end framework for legged reactive navigation in complex dynamic environments using a single light detection and ranging (LiDAR) sensor. The system comprises four simulation-trained modules: three reinforcement-learning (RL) policies for locomotion, safety shielding, and navigation, and a transformer-based exteroceptive estimator that processes raw point-cloud inputs. This modular decomposition of complex legged motor-control tasks enables lightweight neural networks with simple architectures, trained using standard RL practices with targeted reward shaping and curriculum design, without reliance on heuristics or sophisticated policy-switching mechanisms. We conduct comprehensive ablations to validate our design choices and demonstrate improved robustness compared to existing approaches in challenging navigation tasks. The resulting reactive safe navigation (REASAN) system achieves fully onboard and real-time reactive navigation across both single- and multi-robot settings in complex environments. We release our training and deployment code at https://github.com/ASIG-X/REASAN.
Problem

Research questions and friction points this paper is trying to address.

How to achieve real-time, reactive safe navigation for legged robots in complex dynamic environments using only a single onboard LiDAR
How to keep the onboard networks lightweight enough for millisecond-level reaction times
How to remain robust in complex settings without hand-crafted heuristics or policy-switching mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular end-to-end framework for legged reactive navigation
Four simulation-trained modules including RL policies and transformer estimator
Lightweight neural networks trained with reward shaping and curriculum learning