🤖 AI Summary
This work addresses real-time obstacle avoidance for quadrupedal robots in complex 3D environments, particularly when aerial obstacles, unstructured terrain, and dynamic objects occur concurrently and sensor noise, illumination sensitivity, and computational overhead severely limit performance. We propose an end-to-end, point-cloud-driven locomotion control framework. Our key contributions are: (1) PD-RiskNet, a novel risk-aware neural network that directly processes spatio-temporal LiDAR point clouds to enable omnidirectional, map-free obstacle avoidance; (2) a high-fidelity LiDAR simulation toolkit with realistic noise modeling that supports training across physics engines (Isaac Gym, Genesis, MuJoCo); and (3) a deep reinforcement learning pipeline that achieves robust sim-to-real transfer. Experiments demonstrate significant improvements in traversal success rate and motion stability over conventional map-based approaches in mixed static-dynamic scenarios. The code and trained models will be publicly released.
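The summary above mentions a LiDAR simulation toolkit with realistic noise modeling. As a minimal illustration of what such a model can look like, the sketch below perturbs ideal raycast ranges with distance-dependent Gaussian noise and random dropout; the function name, parameter values, and noise form are our assumptions for illustration, not the released toolkit's API or calibrated settings.

```python
import numpy as np

def add_lidar_noise(ranges, sigma_base=0.01, sigma_slope=0.002,
                    dropout_prob=0.02, max_range=30.0, rng=None):
    """Illustrative LiDAR noise model (hypothetical, not the paper's toolkit).

    `ranges` is an (N,) array of ideal ray distances from a raycaster.
    Range noise grows linearly with distance, and a small fraction of rays
    is dropped to mimic missing returns.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = sigma_base + sigma_slope * ranges          # distance-dependent std
    noisy = ranges + rng.normal(0.0, sigma)            # per-ray Gaussian noise
    dropped = rng.random(ranges.shape) < dropout_prob  # random missing returns
    noisy[dropped] = max_range                         # report as a no-hit ray
    return np.clip(noisy, 0.0, max_range)
```

In a full training pipeline, `ranges` would come from a fast raycaster inside the simulator (Isaac Gym, Genesis, or MuJoCo) before the noisy returns are assembled into the policy's point-cloud observation.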
📝 Abstract
Agile locomotion in complex 3D environments requires robust spatial awareness to safely avoid diverse obstacles such as aerial clutter, uneven terrain, and dynamic agents. Depth-based perception approaches often struggle with sensor noise and lighting variability, incur computational overhead from intermediate representations (e.g., elevation maps), and handle non-planar obstacles poorly, limiting performance in unstructured environments. In contrast, direct integration of LiDAR sensing into end-to-end learning for legged locomotion remains underexplored. We propose Omni-Perception, an end-to-end locomotion policy that achieves 3D spatial awareness and omnidirectional collision avoidance by directly processing raw LiDAR point clouds. At its core is PD-RiskNet (Proximal-Distal Risk-Aware Hierarchical Network), a novel perception module that interprets spatio-temporal LiDAR data for environmental risk assessment. To facilitate efficient policy learning, we develop a high-fidelity LiDAR simulation toolkit with realistic noise modeling and fast raycasting, compatible with platforms such as Isaac Gym, Genesis, and MuJoCo, enabling scalable training and effective sim-to-real transfer. Learning reactive control policies directly from raw LiDAR data allows the robot to navigate environments with static and dynamic obstacles more robustly than approaches that rely on intermediate maps or limited sensing. We validate Omni-Perception through real-world experiments and extensive simulation, demonstrating strong omnidirectional avoidance capabilities and superior locomotion performance in highly dynamic environments. We will open-source our code and models.
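To make the proximal-distal idea concrete, here is a minimal sketch of how such a hierarchical point-cloud encoder could be structured: nearby points, which matter most for imminent collisions, and far points, which inform coarser risk assessment, are encoded separately and pooled into a fixed-size feature. The radius split, MLP sizes, and max-pooling are our assumptions for illustration, not the actual PD-RiskNet architecture described in the paper.

```python
import torch
import torch.nn as nn

class ProximalDistalEncoder(nn.Module):
    """Hypothetical proximal-distal point-cloud encoder (a sketch only).

    Points are split by distance into a proximal (near) and a distal (far)
    set, each processed by its own per-point MLP and max-pooled into a
    fixed-size feature, PointNet-style. The real PD-RiskNet is defined in
    the paper, not here.
    """

    def __init__(self, feat_dim: int = 64, split_radius: float = 1.5):
        super().__init__()
        self.split_radius = split_radius
        self.feat_dim = feat_dim

        def make_mlp():
            return nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )

        self.proximal_mlp = make_mlp()  # fine-grained nearby geometry
        self.distal_mlp = make_mlp()    # coarse far-field structure

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) LiDAR returns in the robot body frame (single scan).
        dist = points.norm(dim=-1)
        regions = (
            (points[dist <= self.split_radius], self.proximal_mlp),
            (points[dist > self.split_radius], self.distal_mlp),
        )
        feats = []
        for pts, mlp in regions:
            if pts.shape[0] == 0:
                # Empty region: contribute a zero feature of the right size.
                feats.append(points.new_zeros(self.feat_dim))
            else:
                # Per-point MLP then max-pooling: permutation-invariant.
                feats.append(mlp(pts).max(dim=0).values)
        return torch.cat(feats)  # (2 * feat_dim,) risk embedding for the policy


# Usage: encode one scan; the result would be concatenated with
# proprioception as input to the reinforcement-learned locomotion policy.
scan = torch.randn(2048, 3)                   # stand-in for a LiDAR point cloud
risk_feature = ProximalDistalEncoder()(scan)  # -> torch.Size([128])
```

Max-pooling over a per-point MLP keeps the feature invariant to point ordering and count, which is one natural way to consume raw, variable-size LiDAR scans without building an intermediate map.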