Human-Object Interaction with Vision-Language Model Guided Relative Movement Dynamics

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for human-object interaction (HOI) struggle to simultaneously ensure physical plausibility and support diverse interaction types. To address this, the paper proposes a unified framework comprising three key components: (1) a Relative Movement Dynamics (RMD) representation that explicitly models continuous, stable motion relationships between human body parts and object parts; (2) vision-language model (VLM)-guided translation of natural language commands into RMD diagrams, which drive goal-conditioned reinforcement learning for long-horizon, physically grounded interactions with both static scenes and dynamic/articulated objects; and (3) Interplay, a new dataset of VLM-generated multi-round task plans covering static and dynamic HOI tasks. Experiments demonstrate substantial improvements in interaction stability, generalization to unseen objects and actions, and task completion rates, outperforming prior work in both realism and interaction diversity.

📝 Abstract
Human-Object Interaction (HOI) is vital for advancing simulation, animation, and robotics, enabling the generation of long-term, physically plausible motions in 3D environments. However, existing methods often fall short of achieving physical realism and supporting diverse types of interactions. To address these challenges, this paper introduces a Human-Object Interaction framework that provides unified control over interactions with static scenes and dynamic objects using language commands. The key observation is that interactions between human body parts and object parts can be described as continuous, stable Relative Movement Dynamics (RMD). By leveraging the world knowledge and scene perception capabilities of Vision-Language Models (VLMs), we translate language commands into RMD diagrams, which guide goal-conditioned reinforcement learning for sequential interaction with objects. Our framework supports long-horizon interactions with dynamic, articulated, and static objects. To support training and evaluation, we present a new dataset named Interplay, which includes multi-round task plans generated by VLMs, covering both static and dynamic HOI tasks. Extensive experiments demonstrate that our framework effectively handles a wide range of HOI tasks and maintains long-term, multi-round transitions. For more details, please refer to our project webpage: https://rmd-hoi.github.io/.
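The abstract describes a pipeline in which a VLM turns a language command into a multi-round plan of RMD targets. The sketch below illustrates one plausible shape for that hand-off: a JSON plan is parsed into per-round lists of human-part/object-part relations. The JSON schema, field names, and `parse_plan` helper are illustrative assumptions, not the paper's actual interface.

```python
import json

# Hypothetical multi-round plan as a VLM might emit it. Part names,
# relations, and offsets are made up for illustration.
VLM_PLAN = """
{
  "task": "move the chair next to the table, then sit on it",
  "rounds": [
    {"action": "grasp",
     "rmd": [{"human_part": "right_hand", "object_part": "chair_back",
              "relation": "hold", "offset": [0.0, 0.0, 0.05]}]},
    {"action": "sit",
     "rmd": [{"human_part": "pelvis", "object_part": "chair_seat",
              "relation": "contact", "offset": [0.0, 0.0, 0.10]}]}
  ]
}
"""

def parse_plan(raw):
    """Turn raw VLM output into a list of (action, rmd_edges) rounds,
    where each edge is a (human_part, object_part, offset) tuple that a
    goal-conditioned policy could consume as its goal for that round."""
    plan = json.loads(raw)
    rounds = []
    for rnd in plan["rounds"]:
        edges = [(e["human_part"], e["object_part"], tuple(e["offset"]))
                 for e in rnd["rmd"]]
        rounds.append((rnd["action"], edges))
    return rounds
```

Executing the rounds sequentially is what would give the long-horizon, multi-round behavior the abstract describes: each round's RMD edges define the goal until it is satisfied, then the next round takes over.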
Problem

Research questions and friction points this paper is trying to address.

Achieving physics realism in Human-Object Interaction
Supporting diverse interaction types with language commands
Generating long-term, multi-round HOI motions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Vision-Language Models for command translation
Uses Relative Movement Dynamics diagrams for guidance
Employs goal-conditioned reinforcement learning for interactions
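The innovations above can be tied together in a minimal sketch: an RMD diagram as a set of edges between human body parts and object parts, each with a desired relative offset, scored by a goal-conditioned reward. The edge structure, part names, and the distance-based exponential reward are assumptions for illustration, not the paper's actual formulation.

```python
import math
from dataclasses import dataclass

@dataclass
class RMDEdge:
    """One edge of a hypothetical RMD diagram: a human body part that
    should hold a stable relative offset to an object part."""
    human_part: str        # e.g. "right_hand" (illustrative name)
    object_part: str       # e.g. "cup_handle" (illustrative name)
    target_offset: tuple   # desired relative position (x, y, z)

def rmd_reward(edges, positions, scale=5.0):
    """Goal-conditioned reward: approaches 1.0 as every human/object
    part pair reaches its target relative offset, decaying smoothly
    with distance so RL gets a dense training signal."""
    total = 0.0
    for e in edges:
        hp = positions[e.human_part]
        op = positions[e.object_part]
        rel = tuple(h - o for h, o in zip(hp, op))
        dist = math.dist(rel, e.target_offset)
        total += math.exp(-scale * dist)  # bounded per-edge term in (0, 1]
    return total / len(edges)
```

Because the reward depends only on relative positions, it stays meaningful whether the object is static, dynamic, or articulated, which is one reason a relative-movement formulation suits the mixed interaction types the paper targets.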