InterPrior: Scaling Generative Control for Physics-Based Human-Object Interactions

📅 2026-02-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of enabling humanoid robots to generate physically consistent whole-body human-object interaction skills that generalize to unseen environments without explicit motion planning. The authors propose InterPrior, a framework that unifies variational policies, large-scale imitation pretraining, and reinforcement learning fine-tuning to build a generative controller that synthesizes robust actions from multimodal observations and high-level task intents. Physics-aware data augmentation through perturbations, followed by reinforcement learning refinement, substantially improves generalization across unseen objects, goals, and initial states. Experiments show that InterPrior supports interactive user control and executes a diverse range of previously unseen interaction tasks on a real-world humanoid robot platform.
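The summary centers on a goal-conditioned variational policy acting as a generative controller, but this page gives no implementation details. The sketch below is therefore only a generic PyTorch illustration of such a policy: an encoder maps observation and goal to a latent skill, and a decoder maps observation and latent to an action. All names and sizes (`obs_dim`, `goal_dim`, `latent_dim`, the MLP widths) are assumptions, not the authors' architecture.

```python
# Minimal sketch (assumption, not the paper's code): a goal-conditioned
# variational policy. The encoder infers a latent skill z from (obs, goal);
# the decoder produces an action from (obs, z). Sampling z from the prior
# at deployment turns the decoder into a generative controller.
import torch
import torch.nn as nn

class VariationalPolicy(nn.Module):
    def __init__(self, obs_dim: int, goal_dim: int, act_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim + goal_dim, 256), nn.ELU(),
            nn.Linear(256, 2 * latent_dim),  # mean and log-variance of z
        )
        self.decoder = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )
        self.latent_dim = latent_dim

    def forward(self, obs, goal):
        mu, logvar = self.encoder(torch.cat([obs, goal], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        action = self.decoder(torch.cat([obs, z], dim=-1))
        return action, mu, logvar

    @torch.no_grad()
    def act(self, obs):
        # Generate behavior by sampling a latent skill from the prior.
        z = torch.randn(obs.shape[0], self.latent_dim, device=obs.device)
        return self.decoder(torch.cat([obs, z], dim=-1))
```

In this kind of design, the KL term of the VAE objective regularizes the latent space so that prior samples remain on a valid skill manifold, which is the role the summary attributes to InterPrior's motion prior.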

📝 Abstract
Humans rarely plan interactions with objects at the level of explicit whole-body movements. High-level intentions, such as affordances, define the goal, while coordinated balance, contact, and manipulation can emerge naturally from underlying physical and motor priors. Scaling such priors is key to enabling humanoids to compose and generalize loco-manipulation skills across diverse contexts while maintaining physically coherent whole-body coordination. To this end, we introduce InterPrior, a scalable framework that learns a unified generative controller through large-scale imitation pretraining followed by reinforcement learning post-training. InterPrior first distills a full-reference imitation expert into a versatile, goal-conditioned variational policy that reconstructs motion from multimodal observations and high-level intent. While the distilled policy reconstructs training behaviors, it does not generalize reliably, owing to the vast configuration space of large-scale human-object interactions. To address this, we apply data augmentation with physical perturbations and then perform reinforcement learning fine-tuning to improve competence on unseen goals and initializations. Together, these steps consolidate the reconstructed latent skills into a valid manifold, yielding a motion prior that generalizes beyond the training data; for example, it can incorporate new behaviors such as interactions with unseen objects. We further demonstrate its effectiveness for user-interactive control and its potential for real-robot deployment.
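The abstract outlines a three-stage recipe: distill an imitation expert into the variational policy, augment the data with physical perturbations, then fine-tune with RL. As a minimal illustration of the middle step only, the NumPy sketch below jitters a reference state before it is used for training. The noise scales, the 29-DoF state, and the name `perturb_state` are all hypothetical choices for this sketch, not values from the paper.

```python
# Hedged sketch of physics-perturbation data augmentation (assumed form):
# jitter the initial joint positions/velocities around a reference frame
# so the distilled policy also trains from states off the demonstrated motion.
import numpy as np

rng = np.random.default_rng(0)

def perturb_state(qpos: np.ndarray, qvel: np.ndarray,
                  pos_noise: float = 0.02, vel_noise: float = 0.1):
    """Return a perturbed copy of a reference (qpos, qvel) pair.

    Noise magnitudes are placeholders, not the paper's settings.
    """
    qpos = qpos + rng.uniform(-pos_noise, pos_noise, size=qpos.shape)
    qvel = qvel + rng.uniform(-vel_noise, vel_noise, size=qvel.shape)
    return qpos, qvel

# Example: augment one reference frame of a hypothetical 29-DoF humanoid.
qpos0, qvel0 = np.zeros(29), np.zeros(29)
qpos_aug, qvel_aug = perturb_state(qpos0, qvel0)
```

Training from such perturbed initializations, followed by RL fine-tuning on unseen goals, is what the abstract credits with consolidating the reconstructed latent skills into a manifold that generalizes beyond the training data.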
Problem

Research questions and friction points this paper is trying to address.

human-object interaction
whole-body coordination
motion prior
loco-manipulation
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative control
motion prior
human-object interaction
imitation learning
reinforcement learning
Sirui Xu
University of Illinois Urbana-Champaign
Computer Vision, Machine Learning, Virtual Humans, Character Animation, Human-Object Interaction
Samuel Schulter
Amazon AGI
Computer Vision, Machine Learning
Morteza Ziyadi
Amazon
Xialin He
University of Illinois Urbana-Champaign
Xiaohan Fei
Amazon Web Services
Computer Vision, Robotics, Machine Learning
Yu-Xiong Wang
University of Illinois Urbana-Champaign
Liang-Yan Gui
University of Illinois Urbana-Champaign