Preference Aligned Diffusion Planner for Quadrupedal Locomotion Control

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the over-reliance of diffusion-based planners on data diversity and their insufficient robustness under limited training data in quadrupedal locomotion control, this paper proposes a reward-free two-stage learning framework. In the first stage, an offline diffusion model learns the expert's joint state-action distribution; in the second stage, online policy optimization is guided by weak preference annotations, requiring neither ground-truth rewards nor human-provided preferences. The work introduces a diffusion policy fine-tuning method under this weak supervision and enables zero-shot transfer to a real Unitree Go1 robot. Experiments demonstrate significant improvements in trajectory tracking accuracy and gait stability across multi-gait (pacing, trotting, bounding) and multi-speed locomotion tasks, validating both data efficiency and cross-domain generalization.

📝 Abstract
Diffusion models demonstrate superior performance in capturing complex distributions from large-scale datasets, providing a promising solution for quadrupedal locomotion control. However, the robustness of a diffusion planner is inherently dependent on the diversity of the pre-collected dataset. To mitigate this issue, we propose a two-stage learning framework to enhance the capability of the diffusion planner under limited, reward-agnostic datasets. In the offline stage, the diffusion planner learns the joint distribution of state-action sequences from expert datasets without using reward labels. Subsequently, we perform online interaction in the simulation environment based on the trained offline planner, which significantly diversifies the original behavior and thus improves robustness. Specifically, we propose a novel weak preference labeling method that requires neither ground-truth rewards nor human preferences. The proposed method exhibits superior stability and velocity-tracking accuracy in pacing, trotting, and bounding gaits at different speeds, and can perform a zero-shot transfer to real Unitree Go1 robots. The project website for this paper is at https://shangjaven.github.io/preference-aligned-diffusion-legged.
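The core idea of the weak preference labeling stage can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a proxy score (hypothetical mean velocity-tracking error) to rank two rollouts without ground-truth rewards or human annotators, paired with a standard Bradley-Terry preference loss; the paper's actual labeling criterion and fine-tuning objective may differ.

```python
import numpy as np

def weak_preference_label(traj_a, traj_b, target_vel):
    """Label which rollout is preferred using only a reward-free proxy:
    mean absolute velocity-tracking error (hypothetical criterion).
    Returns 0 if traj_a is preferred, 1 if traj_b is preferred."""
    err_a = np.mean(np.abs(traj_a["base_vel"] - target_vel))
    err_b = np.mean(np.abs(traj_b["base_vel"] - target_vel))
    return 0 if err_a <= err_b else 1

def preference_loss(score_a, score_b, label):
    """Bradley-Terry style loss: pushes the planner's scalar score for the
    preferred trajectory above the score of the rejected one."""
    p_a = 1.0 / (1.0 + np.exp(-(score_a - score_b)))  # P(traj_a preferred)
    return -np.log(p_a) if label == 0 else -np.log(1.0 - p_a)

# Example: a rollout tracking the 1.0 m/s command closely beats one that lags.
good = {"base_vel": np.array([0.9, 1.0, 1.1])}
poor = {"base_vel": np.array([0.2, 0.3, 0.4])}
label = weak_preference_label(good, poor, target_vel=1.0)  # -> 0
```

Such automatically generated preference pairs would then drive the online fine-tuning of the offline-trained diffusion planner.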
Problem

Research questions and friction points this paper is trying to address.

Enhances quadrupedal locomotion control robustness
Improves diffusion planner with limited datasets
Enables zero-shot transfer to real robots
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage learning framework for diffusion planner
Weak preference labeling without ground-truth rewards
Zero-shot transfer to real quadrupedal robots
Xinyi Yuan
Intelligent Transportation Thrust, System Hub, Hong Kong University of Science and Technology (Guangzhou)
Zhiwei Shang
The Chinese University of Hong Kong, Shenzhen
Robot Learning, Reinforcement Learning
Zifan Wang
Intelligent Transportation Thrust, System Hub, Hong Kong University of Science and Technology (Guangzhou)
Chenkai Wang
Department of Statistics and Data Science, Southern University of Science and Technology
Zhao Shan
Institute of Artificial Intelligence (TeleAI), China Telecom
Zhenchao Qi
Intelligent Transportation Thrust, System Hub, Hong Kong University of Science and Technology (Guangzhou)
Meixin Zhu
Professor, Southeast University
Autonomous driving, reinforcement learning, driving behavior, traffic flow, traffic safety
Chenjia Bai
Institute of Artificial Intelligence (TeleAI), China Telecom
Reinforcement Learning, Robotics, Embodied AI
Xuelong Li
Institute of Artificial Intelligence (TeleAI), China Telecom