Omni-Perception: Omnidirectional Collision Avoidance for Legged Locomotion in Dynamic Environments

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the real-time obstacle avoidance challenge for quadrupedal robots in complex 3D environments—particularly under concurrent aerial obstacles, unstructured terrain, and dynamic objects—where sensor noise, illumination sensitivity, and computational overhead severely limit performance. We propose an end-to-end, point-cloud-driven locomotion control framework. Our key contributions are: (1) PD-RiskNet, a novel risk-aware neural network that processes spatiotemporal LiDAR point clouds directly to enable omnidirectional, map-free obstacle avoidance; (2) a high-fidelity LiDAR simulation toolkit supporting cross-physics-engine training (Isaac Gym, Genesis, MuJoCo) and realistic noise modeling; and (3) integration of deep reinforcement learning with robust sim-to-real transfer. Experiments demonstrate significant improvements in traversal success rate and motion stability over conventional map-based approaches in mixed static-dynamic scenarios. The code and trained models will be publicly released.

📝 Abstract
Agile locomotion in complex 3D environments requires robust spatial awareness to safely avoid diverse obstacles such as aerial clutter, uneven terrain, and dynamic agents. Depth-based perception approaches often struggle with sensor noise, lighting variability, computational overhead from intermediate representations (e.g., elevation maps), and difficulties with non-planar obstacles, limiting performance in unstructured environments. In contrast, direct integration of LiDAR sensing into end-to-end learning for legged locomotion remains underexplored. We propose Omni-Perception, an end-to-end locomotion policy that achieves 3D spatial awareness and omnidirectional collision avoidance by directly processing raw LiDAR point clouds. At its core is PD-RiskNet (Proximal-Distal Risk-Aware Hierarchical Network), a novel perception module that interprets spatio-temporal LiDAR data for environmental risk assessment. To facilitate efficient policy learning, we develop a high-fidelity LiDAR simulation toolkit with realistic noise modeling and fast raycasting, compatible with platforms such as Isaac Gym, Genesis, and MuJoCo, enabling scalable training and effective sim-to-real transfer. Learning reactive control policies directly from raw LiDAR data enables the robot to navigate complex environments with static and dynamic obstacles more robustly than approaches relying on intermediate maps or limited sensing. We validate Omni-Perception through real-world experiments and extensive simulation, demonstrating strong omnidirectional avoidance capabilities and superior locomotion performance in highly dynamic environments. We will open-source our code and models.
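The abstract's LiDAR simulation toolkit is described as combining fast raycasting with realistic noise modeling for sim-to-real transfer. As a minimal sketch of what such a noise model might look like, the snippet below adds Gaussian range jitter and random ray dropout to simulated range returns; the function name and the `sigma`/`dropout_prob` parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def add_lidar_noise(ranges, sigma=0.02, dropout_prob=0.01,
                    max_range=30.0, rng=None):
    """Illustrative LiDAR noise model (assumed, not from the paper):
    Gaussian range jitter plus random ray dropout, where dropped rays
    report max_range (i.e., no return)."""
    rng = rng or np.random.default_rng()
    # Perturb every return with zero-mean Gaussian noise.
    noisy = ranges + rng.normal(0.0, sigma, size=ranges.shape)
    # Randomly drop a fraction of rays to mimic missed returns.
    drop = rng.random(ranges.shape) < dropout_prob
    noisy[drop] = max_range
    # Keep ranges physically plausible.
    return np.clip(noisy, 0.0, max_range)
```

In a training loop, such a model would be applied to the raycast depths each simulation step before the point cloud is fed to the policy, so the learned controller never sees idealized sensor data.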
Problem

Research questions and friction points this paper is trying to address.

Achieving 3D spatial awareness for legged robots in dynamic environments
Overcoming sensor noise and computational overhead in depth-based perception
Enabling robust omnidirectional collision avoidance using raw LiDAR data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct LiDAR point cloud processing for 3D awareness
PD-RiskNet for spatio-temporal risk assessment
High-fidelity LiDAR simulation toolkit for training
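The page does not detail PD-RiskNet's internals, but its expansion (Proximal-Distal Risk-Aware Hierarchical Network) suggests the point cloud is partitioned by distance from the robot before hierarchical processing. The sketch below shows one assumed form of such a preprocessing split; the function name and the `radius` threshold are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def split_proximal_distal(points, radius=2.0):
    """Hypothetical preprocessing consistent with the proximal-distal
    naming: partition an (N, 3) robot-centric LiDAR point cloud into
    near-field and far-field subsets by horizontal distance."""
    # Horizontal (x, y) distance of each point from the robot base.
    dists = np.linalg.norm(points[:, :2], axis=1)
    proximal = points[dists <= radius]   # nearby points: immediate risk
    distal = points[dists > radius]      # far points: anticipatory context
    return proximal, distal
```

Under this reading, the proximal branch would drive reactive avoidance while the distal branch informs longer-horizon risk assessment, with the two streams fused downstream in the policy network.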
👥 Authors
Zifan Wang
The Hong Kong University of Science and Technology (Guangzhou)
Teli Ma
HKUST(GZ) | Shanghai AI Laboratory
Computer Vision · Vision-Language · Robotics
Yufei Jia
Department of Electronic Engineering, Tsinghua University
Xun Yang
The Hong Kong University of Science and Technology (Guangzhou)
Jiaming Zhou
The Hong Kong University of Science and Technology (Guangzhou)
Wenlong Ouyang
The Hong Kong University of Science and Technology (Guangzhou)
Qiang Zhang
Beijing Innovation Center of Humanoid Robotics Co., Ltd.
Junwei Liang
Assistant Professor, HKUST (Guangzhou) | CSE, HKUST | Ph.D. @CMU
Computer Vision · Robotics · Embodied AI · Trajectory Prediction