Learning Whole-Body Human-Humanoid Interaction from Human-Human Demonstrations

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of achieving natural, synchronized whole-body physical collaboration in humanoid robots, progress on which is hindered by the scarcity of high-quality Human-Humanoid Interaction (HHoI) data. To overcome this limitation, the authors propose a two-stage framework: first, Physics-Aware Interaction Retargeting (PAIR) generates contact-preserving HHoI data from Human-Human Interaction (HHI) datasets; second, the Decoupled Spatio-Temporal Action Reasoner (D-STAR) decomposes behavioral decision-making into temporal phasing ("when to act") and spatial action selection ("where to act"), moving beyond conventional trajectory imitation. Together, these components enable end-to-end contact-centric HHoI learning, significantly outperform baseline methods in simulation, and establish an efficient, complete pipeline for learning complex whole-body collaborative interactions.

📝 Abstract
Enabling humanoid robots to physically interact with humans is a critical frontier, but progress is hindered by the scarcity of high-quality Human-Humanoid Interaction (HHoI) data. While leveraging abundant Human-Human Interaction (HHI) data presents a scalable alternative, we first demonstrate that standard retargeting fails by breaking the essential contacts. We address this with PAIR (Physics-Aware Interaction Retargeting), a contact-centric, two-stage pipeline that preserves contact semantics across morphology differences to generate physically consistent HHoI data. This high-quality data, however, exposes a second failure: conventional imitation learning policies merely mimic trajectories and lack interactive understanding. We therefore introduce D-STAR (Decoupled Spatio-Temporal Action Reasoner), a hierarchical policy that disentangles when to act from where to act. In D-STAR, Phase Attention (when) and a Multi-Scale Spatial module (where) are fused by the diffusion head to produce synchronized whole-body behaviors beyond mimicry. By decoupling these reasoning streams, our model learns robust temporal phases without being distracted by spatial noise, leading to responsive, synchronized collaboration. We validate our framework through extensive and rigorous simulations, demonstrating significant performance gains over baseline approaches and a complete, effective pipeline for learning complex whole-body interactions from HHI data.
Problem

Research questions and friction points this paper is trying to address.

Human-Humanoid Interaction
Imitation Learning
Contact Preservation
Whole-Body Interaction
Interaction Retargeting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Aware Retargeting
Contact-Centric Interaction
Decoupled Spatio-Temporal Reasoning
Diffusion-Based Policy
Human-Humanoid Interaction
Wei-Jin Huang, Sun Yat-sen University (Video Action Understanding, Humanoid Learning)
Yue-Yi Zhang, School of Computer Science and Engineering, Sun Yat-sen University, China
Yi-Lin Wei, Sun Yat-sen University
Zhi-Wei Xia, School of Computer Science and Engineering, Sun Yat-sen University, China
Juantao Tan, School of Computer Science and Engineering, Sun Yat-sen University, China
Yuan-Ming Li, Sun Yat-sen University (Computer Vision)
Zhilin Zhao, School of Computer Science and Engineering, Sun Yat-sen University, China
Wei-Shi Zheng, Professor, Sun Yat-sen University (Computer Vision, Pattern Recognition, Machine Learning)