Making Avatars Interact: Towards Text-Driven Human-Object Interaction for Controllable Talking Avatars

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to generate controllable talking avatars that exhibit embodied, text-aligned interactions with surrounding objects, chiefly because of limited environmental awareness and the trade-off between controllability and visual fidelity. To address this, this work proposes InteractAvatar, a novel dual-stream framework that achieves, for the first time, text-driven generation of embodied talking avatars capable of object interaction. The approach decouples environmental perception and planning from video synthesis through two parallel modules: a Perception and Interaction Module (PIM) and an Audio-Interaction Aware Generation Module (AIM). Object detection enhances environmental understanding, while a motion-to-video aligner keeps the generated motions consistent with the synthesized video. Extensive experiments on the newly introduced GroundedInter benchmark demonstrate that InteractAvatar significantly outperforms existing methods, producing high-quality, semantically aligned interactive talking-avatar videos.

📝 Abstract
Generating talking avatars is a fundamental task in video generation. Although existing methods can generate full-body talking avatars with simple human motion, extending this task to grounded human-object interaction (GHOI) remains an open challenge, requiring the avatar to perform text-aligned interactions with surrounding objects. This challenge stems from the need for environmental perception and the control-quality dilemma in GHOI generation. To address this, we propose a novel dual-stream framework, InteractAvatar, which decouples perception and planning from video synthesis for grounded human-object interaction. Leveraging detection to enhance environmental perception, we introduce a Perception and Interaction Module (PIM) to generate text-aligned interaction motions. Additionally, an Audio-Interaction Aware Generation Module (AIM) is proposed to synthesize vivid talking avatars performing object interactions. With a specially designed motion-to-video aligner, PIM and AIM share a similar network structure and enable parallel co-generation of motions and plausible videos, effectively mitigating the control-quality dilemma. Finally, we establish a benchmark, GroundedInter, for evaluating GHOI video generation. Extensive experiments and comparisons demonstrate the effectiveness of our method in generating grounded human-object interactions for talking avatars. Project page: https://interactavatar.github.io
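The dual-stream design described in the abstract (a perception/planning stream feeding a synthesis stream through an aligner) can be sketched roughly as follows. Every class, function, and string below is a hypothetical illustration of that decoupling, not the paper's actual implementation: `detect_objects`, `pim_plan_motion`, and `aim_render` are stand-ins for the detector, PIM, and AIM.

```python
# Hypothetical sketch of InteractAvatar's dual-stream flow.
# All names and data formats here are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ObjectBox:
    label: str
    box: Tuple[float, float, float, float]  # (x, y, w, h), normalized


def detect_objects(scene: str) -> List[ObjectBox]:
    # Stand-in for the detector that gives PIM its environmental awareness.
    return [ObjectBox("cup", (0.6, 0.5, 0.1, 0.1))]


def pim_plan_motion(text: str, objects: List[ObjectBox]) -> List[str]:
    # Perception and Interaction Module: plan text-aligned interaction
    # motions grounded in the detected objects.
    target = next((o for o in objects if o.label in text), None)
    if target is None:
        return ["idle"]
    return [f"reach:{target.label}", f"grasp:{target.label}"]


def aim_render(audio: List[float], motion: List[str]) -> List[str]:
    # Audio-Interaction Aware Generation Module: synthesize one frame
    # token per motion step; the motion-to-video aligner is abstracted
    # here as simply conditioning each frame on the planned motion.
    return [f"frame({m}, audio_len={len(audio)})" for m in motion]


def interact_avatar(text: str, audio: List[float], scene: str) -> List[str]:
    objects = detect_objects(scene)              # environmental perception
    motion = pim_plan_motion(text, objects)      # planning stream (PIM)
    return aim_render(audio, motion)             # synthesis stream (AIM)


frames = interact_avatar("pick up the cup", [0.1, 0.2], "scene.png")
```

The point of the sketch is the separation of concerns the abstract emphasizes: the planning stream never touches pixels, and the synthesis stream never reasons about text, which is what lets the two be co-generated in parallel without the control-quality dilemma.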
Problem

Research questions and friction points this paper is trying to address.

talking avatars
human-object interaction
text-driven generation
environmental perception
controllable video synthesis
Innovation

Methods, ideas, or system contributions that make the work stand out.

text-driven human-object interaction
talking avatars
dual-stream framework
environmental perception
motion-to-video alignment