🤖 AI Summary
To address the poor transferability of agile whole-body locomotion policies for humanoid robots caused by sim-to-real dynamics mismatch, this paper proposes a two-stage transfer framework. First, a motion-tracking policy is pre-trained in simulation; second, a residual action model (a "delta action model") is learned from real-robot data and integrated into the simulator, where it compensates for the dynamics mismatch while the pre-trained policy is fine-tuned. The approach avoids labor-intensive system identification and overly conservative domain randomization, substantially improving transfer fidelity and sample efficiency. The framework is platform-agnostic, compatible with IsaacGym, IsaacSim, and Genesis, and combines reinforcement learning, motion retargeting, and closed-loop fine-tuning in simulation. Evaluated across three cross-platform transfer scenarios, it achieves substantially lower motion-tracking error and successfully deploys highly dynamic skills, including backflips and rapid turns, on the real Unitree G1, surpassing system identification (SysID) and domain randomization (DR) baselines in both agility and whole-body coordination.
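The role of the delta action model can be illustrated with a minimal sketch. This is not the paper's implementation: the real delta action model is a learned network, and the 1-D dynamics, coefficients, and function names below are all assumptions chosen only to show where the residual correction enters the control loop.

```python
# Hedged, illustrative sketch: a hand-picked linear residual stands in for
# the learned delta action model. All dynamics and numbers are assumptions.

def sim_dynamics(state, action):
    """Toy simulator: next state under idealized actuation."""
    return state + 0.1 * action

def real_dynamics(state, action):
    """Toy 'real world': actions arrive scaled and biased (the mismatch)."""
    return state + 0.1 * (0.8 * action - 0.05)

def delta_action(state, action):
    """Residual correction added to the policy's action before simulation.
    Chosen so that action + delta = 0.8*action - 0.05, i.e. the simulator,
    driven by the corrected action, reproduces the real-world step."""
    return -0.2 * action - 0.05

state, action = 0.0, 1.0
next_real = real_dynamics(state, action)        # 0.075
next_sim_naive = sim_dynamics(state, action)    # 0.1 -- mismatch vs. real
next_sim_delta = sim_dynamics(state, action + delta_action(state, action))
print(next_real, next_sim_naive, next_sim_delta)  # delta-corrected sim matches real
```

With the residual injected, the simulator's transitions match the (toy) real dynamics, so fine-tuning the policy inside this corrected simulator optimizes against real-world behavior rather than the idealized model.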
📝 Abstract
Humanoid robots hold the potential for unparalleled versatility in performing human-like, whole-body skills. However, achieving agile and coordinated whole-body motions remains a significant challenge due to the dynamics mismatch between simulation and the real world. Existing approaches, such as system identification (SysID) and domain randomization (DR) methods, often rely on labor-intensive parameter tuning or result in overly conservative policies that sacrifice agility. In this paper, we present ASAP (Aligning Simulation and Real-World Physics), a two-stage framework designed to tackle the dynamics mismatch and enable agile humanoid whole-body skills. In the first stage, we pre-train motion tracking policies in simulation using retargeted human motion data. In the second stage, we deploy the policies in the real world and collect real-world data to train a delta (residual) action model that compensates for the dynamics mismatch. Then, ASAP fine-tunes pre-trained policies with the delta action model integrated into the simulator to align effectively with real-world dynamics. We evaluate ASAP across three transfer scenarios: IsaacGym to IsaacSim, IsaacGym to Genesis, and IsaacGym to the real-world Unitree G1 humanoid robot. Our approach significantly improves agility and whole-body coordination across various dynamic motions, reducing tracking error compared to SysID, DR, and delta dynamics learning baselines. ASAP enables highly agile motions that were previously difficult to achieve, demonstrating the potential of delta action learning in bridging simulation and real-world dynamics. These results suggest a promising sim-to-real direction for developing more expressive and agile humanoids.
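The second stage described above (collect real-world rollouts, then train a residual that makes the simulator match them) can be sketched under toy assumptions. Here a linear residual is fitted by least squares in place of the paper's learned delta action network; the 1-D dynamics and all coefficients are invented for illustration.

```python
import numpy as np

# Hedged sketch of stage two: fit a residual action correction from
# recorded "real" transitions. All dynamics below are toy assumptions.

rng = np.random.default_rng(0)

def sim_step(s, a):   # toy simulator dynamics (assumption)
    return s + 0.1 * a

def real_step(s, a):  # toy real dynamics with actuator scale/bias mismatch
    return s + 0.1 * (0.8 * a - 0.05)

# Collect (state, action, next_state) tuples by rolling out on the "real" system.
states = rng.normal(size=50)
actions = rng.normal(size=50)
next_states = real_step(states, actions)

# Fit delta(a) = w*a + b so that sim_step(s, a + delta(a)) = next_state.
# From s + 0.1*((1 + w)*a + b) = s_next  =>  (1 + w)*a + b = (s_next - s)/0.1
targets = (next_states - states) / 0.1
A = np.stack([actions, np.ones_like(actions)], axis=1)
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
w, b = coef[0] - 1.0, coef[1]
print(w, b)  # recovers the mismatch: w ≈ -0.2, b ≈ -0.05
```

The final step, fine-tuning the pre-trained policy inside `sim_step` with the fitted residual injected, is omitted here; in the paper it is done with reinforcement learning rather than a closed-form fit.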