GPO: Growing Policy Optimization for Legged Robot Locomotion and Whole-Body Control

📅 2026-01-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses inefficient exploration and unstable training in the high-dimensional continuous action spaces of torque-controlled legged robots, both of which stem from sparse gradient signals. To this end, the authors propose a progressive action-space expansion framework built on a time-varying action transformation: it initially constrains the action space to facilitate efficient learning, then gradually expands it to enhance exploration. While preserving the standard PPO update rules, the method introduces only bounded, asymptotically vanishing gradient perturbations, ensuring stable and efficient policy optimization. The framework is compatible with both position- and torque-control modes and demonstrates consistent effectiveness across quadruped and hexapod robots. Notably, policies trained in simulation transfer zero-shot to real hardware and significantly outperform existing approaches.

📝 Abstract
Training reinforcement learning (RL) policies for legged robots remains challenging due to high-dimensional continuous actions, hardware constraints, and limited exploration. Existing methods for locomotion and whole-body control work well for position-based control with environment-specific heuristics (e.g., reward shaping, curriculum design, and manual initialization), but are less effective for torque-based control, where sufficiently exploring the action space and obtaining informative gradient signals for training is significantly more difficult. We introduce Growing Policy Optimization (GPO), a training framework that applies a time-varying action transformation to restrict the effective action space in the early stage, thereby encouraging more effective data collection and policy learning, and then progressively expands it to enhance exploration and achieve higher expected return. We prove that this transformation preserves the PPO update rule and introduces only bounded, vanishing gradient distortion, thereby ensuring stable training. We evaluate GPO on both quadruped and hexapod robots, including zero-shot deployment of simulation-trained policies on hardware. Policies trained with GPO consistently achieve better performance. These results suggest that GPO provides a general, environment-agnostic optimization framework for learning legged locomotion.
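The time-varying action transformation described in the abstract might be sketched as a scale schedule applied to a bounded action map, so that the effective action space starts small and grows toward the full range over training. The function name, schedule shape, and parameters below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def growing_action_transform(raw_action, step, total_steps,
                             init_scale=0.2, final_scale=1.0):
    """Hypothetical sketch of a progressively expanding action space.

    Early in training, actions are squashed into a small region
    (efficient data collection); the scale then grows linearly
    toward the full range (broader exploration). The policy's own
    update rule (e.g. PPO) is left untouched.
    """
    frac = min(step / total_steps, 1.0)
    scale = init_scale + (final_scale - init_scale) * frac
    # tanh keeps the transformed action bounded at every stage
    return scale * np.tanh(raw_action)

# Early in training: the same raw action maps to a small torque
early = growing_action_transform(np.array([3.0, -3.0]),
                                 step=0, total_steps=1_000_000)
# Late in training: the mapping covers (almost) the full range
late = growing_action_transform(np.array([3.0, -3.0]),
                                step=1_000_000, total_steps=1_000_000)
```

A linear schedule is only one choice; the paper's analysis requires the induced gradient distortion to stay bounded and vanish asymptotically, which any smooth schedule converging to the identity scale would satisfy.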
Problem

Research questions and friction points this paper is trying to address.

torque-based control
legged robot locomotion
reinforcement learning
action space exploration
whole-body control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Growing Policy Optimization
torque-based control
action space expansion
legged locomotion
reinforcement learning
🔎 Similar Papers
Shuhao Liao
Beihang University
Multi-agent Systems, Reinforcement Learning, Robot Learning
Peizhuo Li
ETH Zurich
Character Animation, Deep Learning
Xinrong Yang
Department of Mechanical Engineering, National University of Singapore, Singapore
Linnan Chang
Department of Mechanical Engineering, National University of Singapore, Singapore
Zhaoxin Fan
Beihang University, China
Qing Wang
Associate Researcher, University of Science and Technology of China
Speech Enhancement, Robust Speech Recognition, Speech Signal Processing, Acoustic Sound Event, Audio-Visual Scene Classification
Lei Shi
Henan University, China
Yuhong Cao
National University of Singapore
Robot Learning, Path Planning
Wenjun Wu
Beihang University, China
G. Sartoretti
Department of Mechanical Engineering, National University of Singapore, Singapore