Whole-Body Constrained Learning for Legged Locomotion via Hierarchical Optimization

📅 2025-06-05
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 0
Influential: 0
🤖 AI Summary
Legged robots deployed in safety-critical applications, such as planetary exploration and nuclear facility inspection, face serious risks including joint collisions, excessive torque, and foot slippage, largely due to the sim-to-real gap and the opacity of learned policies. Method: This paper proposes a hierarchical optimization-driven whole-body constrained reinforcement learning (RL) framework. It integrates kinematic and dynamic hard constraints with stability-oriented soft constraints throughout both RL training and real-world deployment, combining model-based control with RL to improve policy safety and interpretability. Results: Evaluated on a hexapod robot, the framework improves traversal capability and operational safety across challenging unstructured terrains, including snow-covered slopes and staircases, while narrowing the performance gap between simulation and reality.

📝 Abstract
Reinforcement learning (RL) has demonstrated impressive performance in legged locomotion across various challenging environments. However, due to the sim-to-real gap and a lack of explainability, unconstrained RL policies deployed in the real world still suffer from safety issues such as joint collisions, excessive torque, or foot slippage in low-friction environments. These problems limit their use in missions with strict safety requirements, such as planetary exploration, nuclear facility inspection, and deep-sea operations. In this paper, we design a hierarchical optimization-based whole-body follower that integrates both hard and soft constraints into the RL framework so that the robot moves with better safety guarantees. Leveraging the advantages of model-based control, our approach allows various types of hard and soft constraints to be defined during training or deployment, which enables policy fine-tuning and mitigates the challenges of sim-to-real transfer. Meanwhile, it preserves the robustness of RL when handling locomotion in complex unstructured environments. The trained policy with the introduced constraints was deployed on a hexapod robot and tested in various outdoor environments, including snow-covered slopes and stairs, demonstrating the strong traversability and safety of our approach.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in legged locomotion via constrained RL
Addressing sim-to-real gap with hierarchical optimization
Enhancing robot mobility in complex unstructured environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical optimization integrates hard and soft constraints
Model-based control enables constraint definition and fine-tuning
Combines RL robustness with safety for complex environments
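The page does not give the follower's equations, but conceptually a constraint-aware follower of this kind acts as a safety filter: it keeps the commanded action close to the raw RL output while enforcing hard limits exactly and penalizing soft (e.g. stability-related) violations. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual formulation; the function name `safe_filter`, the linear soft-constraint form `A_soft @ tau ≈ b_soft`, and the box torque limits are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize


def safe_filter(tau_rl, tau_max, A_soft, b_soft, w_soft=10.0):
    """Project a raw RL torque command onto the hard-constraint set
    while penalizing soft-constraint violations.

    Hypothetical sketch: hard constraints are box torque limits
    (enforced exactly via bounds); the soft constraint is a linear
    residual A_soft @ tau - b_soft (penalized in the cost).
    """
    def cost(tau):
        track = np.sum((tau - tau_rl) ** 2)          # stay close to the policy output
        soft = np.sum((A_soft @ tau - b_soft) ** 2)  # soft-constraint residual
        return track + w_soft * soft

    bounds = [(-t, t) for t in tau_max]              # hard torque limits
    res = minimize(cost, x0=np.clip(tau_rl, -tau_max, tau_max),
                   bounds=bounds, method="L-BFGS-B")
    return res.x


# Example: a policy command exceeding the torque limits is pulled back
# inside the feasible set before being sent to the robot.
tau_rl = np.array([5.0, -5.0, 0.5])
tau_max = np.array([1.0, 1.0, 1.0])
tau_safe = safe_filter(tau_rl, tau_max, np.zeros((1, 3)), np.zeros(1))
```

In the paper's hierarchical scheme, such a filter would sit at a lower priority level than the strict kinematic and dynamic constraints, which is what allows the same constraint definitions to be applied during both training and deployment.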
Haoyu Wang
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Ruyi Zhou
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology
Robotics · Space robotics · Wheeled mobile robots · Scene physical understanding

Liang Ding
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Tie Liu
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Zhelin Zhang
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Peng Xu
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Haibo Gao
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China

Zongquan Deng
State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150001, China