Towards Adaptable Humanoid Control via Adaptive Motion Tracking

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Humanoid robots deployed in real-world scenarios must simultaneously achieve high-fidelity motion imitation and robust environmental adaptability, yet existing approaches trade one for the other: motion-prior methods generalize well but suffer from low motion fidelity, while motion-tracking methods achieve high accuracy but rely heavily on extensive motion datasets and precise reference trajectories. To bridge this gap, the authors propose AdaMimic, a framework enabling adaptive motion tracking from a single reference motion. Its core components are sparse keyframe generation with light editing, policy initialization by tracking the sparse keyframes, and trainable lightweight adapters for fine-grained temporal warping and low-level action refinement. Evaluated in simulation and on a physical Unitree G1 robot, AdaMimic significantly improves imitation accuracy and robustness across environments, achieving high-fidelity, environment-adaptive real-time motion control under minimal observational input.

📝 Abstract
Humanoid robots are envisioned to adapt demonstrated motions to diverse real-world conditions while accurately preserving motion patterns. Existing motion-prior approaches achieve good adaptability from a few motions but often sacrifice imitation accuracy, whereas motion-tracking methods achieve accurate imitation yet require many training motions and a test-time target motion to adapt. To combine their strengths, we introduce AdaMimic, a novel motion-tracking algorithm that enables adaptable humanoid control from a single reference motion. To reduce data dependence while ensuring adaptability, our method first creates an augmented dataset by sparsifying the single reference motion into keyframes and applying light editing with minimal physical assumptions. A policy is then initialized by tracking these sparse keyframes to generate dense intermediate motions, and adapters are subsequently trained to adjust the tracking speed and refine low-level actions based on that adjustment, enabling flexible time warping that further improves imitation accuracy and adaptability. We validate the significant improvements of our approach both in simulation and on a real-world Unitree G1 humanoid robot across multiple tasks and a wide range of adaptation conditions. Videos and code are available at https://taohuang13.github.io/adamimic.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Enabling adaptable humanoid control from a single reference motion
Balancing imitation accuracy with motion adaptability
Reducing data dependence while enabling flexible time warping
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augments the single reference motion into sparse, lightly edited keyframes
Initializes a policy by tracking the sparse keyframes
Trains adapters for flexible time warping and low-level action refinement
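The bullets above describe the pipeline only at a high level. As a rough illustration (not the authors' implementation), the keyframe-sparsification and time-warping ideas might look like the following sketch, where the function names, the pose representation, and the error threshold are all hypothetical; the paper's learned adapter would replace the uniform speed factor with a per-step adjustment:

```python
# Hypothetical sketch of keyframe sparsification and time warping in the
# spirit of AdaMimic. A motion is a list of (time, pose) pairs, with pose
# as a flat list of floats. Names and thresholds are illustrative only.

def sparsify_keyframes(motion, max_error=0.05):
    """Greedily drop frames that linear interpolation between the last
    kept frame and the next frame reconstructs within max_error."""
    if len(motion) <= 2:
        return list(motion)
    keyframes = [motion[0]]
    last_kept = 0
    for i in range(1, len(motion) - 1):
        t0, p0 = motion[last_kept]
        t1, p1 = motion[i + 1]
        t, p = motion[i]
        alpha = (t - t0) / (t1 - t0)
        interp = [a + alpha * (b - a) for a, b in zip(p0, p1)]
        err = max(abs(a - b) for a, b in zip(interp, p))
        if err > max_error:          # frame carries information: keep it
            keyframes.append(motion[i])
            last_kept = i
    keyframes.append(motion[-1])
    return keyframes

def warp_time(keyframes, speed):
    """Uniform time warp: replay the keyframes `speed` times faster.
    A learned adapter would instead predict a per-step speed adjustment
    and refine the low-level actions accordingly."""
    return [(t / speed, pose) for t, pose in keyframes]
```

For a perfectly linear motion the greedy pass keeps only the endpoints, while any frame that deviates from the interpolated path (e.g. a contact event) survives as a keyframe; the warp then stretches or compresses the replay timeline without touching the poses.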
Authors

Tao Huang (Shanghai AI Laboratory, Shanghai Jiao Tong University)
Huayi Wang (Shanghai Jiao Tong University)
Junli Ren (Shanghai AI Laboratory)
Kangning Yin (Shanghai Jiao Tong University)
Zirui Wang (Shanghai AI Laboratory)
Xiao Chen (Shanghai AI Laboratory)
Feiyu Jia (Shanghai AI Laboratory)
Wentao Zhang (Institute of Physics, Chinese Academy of Sciences)
Junfeng Long (UC Berkeley)
Jingbo Wang (Shanghai AI Laboratory)
Jiangmiao Pang (Shanghai AI Laboratory)