🤖 AI Summary
Existing methods struggle to generate semantically consistent and spatially coordinated reactive motion for two interacting agents conditioned on textual descriptions. This paper introduces MoReact, a framework for text-driven synthesis of human reactive motion. MoReact employs a diffusion-based, multi-stage architecture that decouples global trajectory prediction from local motion refinement, and incorporates an interaction loss that explicitly models spatial consistency between the two agents' motions. By jointly leveraging text-conditioned guidance and interaction-aware optimization, MoReact improves the realism, diversity, and controllability of generated reactions on data adapted from a two-person motion dataset, enabling dynamic, semantically grounded responses to a partner's actions.
📝 Abstract
Modeling and generating human reactions poses a significant challenge with broad applications in computer vision and human-computer interaction. Existing methods either treat multiple individuals as a single entity and directly generate interactions, or rely solely on one person's motion to generate the other's reaction, failing to integrate the rich semantic information that underpins human interactions. As a result, these methods often fall short in adaptive responsiveness, i.e., the ability to respond accurately to diverse and dynamic interaction scenarios. Recognizing this gap, our work introduces an approach tailored to text-driven human reaction generation. Given a descriptive text of the interaction scenario, our model generates realistic motion sequences for an individual responding to the other's actions. The goal is to produce motion sequences that not only complement the counterpart's movements but also semantically fit the described interaction. To achieve this, we present MoReact, a diffusion-based method that disentangles the generation of global trajectories and local motions, producing them sequentially. This design stems from the observation that generating the global trajectory first is crucial for guiding local motion, ensuring better alignment with the given action and text. Furthermore, we introduce a novel interaction loss to enhance the realism of generated close interactions. Experiments on data adapted from a two-person motion dataset demonstrate the efficacy of our approach on this novel task, producing realistic, diverse, and controllable reactions that closely match the counterpart's movements while adhering to the textual guidance. Please find our webpage at https://xiyan-xu.github.io/MoReactWebPage.
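The two-stage decoupling described in the abstract (global trajectory first, then local motion conditioned on it) can be sketched at a high level. The snippet below is a minimal illustrative stand-in, not the paper's actual diffusion architecture: the stage functions, shapes, joint count, and the distance-based interaction penalty are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_global_trajectory(partner_motion: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): predict the reactor's root trajectory (T, 3).

    Placeholder for the trajectory generation stage: mirror the partner's
    root path at a fixed offset.
    """
    partner_root = partner_motion[:, 0, :]      # (T, 3), joint 0 = root
    offset = np.array([1.0, 0.0, 0.0])          # stay one unit away
    return partner_root + offset

def refine_local_motion(trajectory: np.ndarray, n_joints: int) -> np.ndarray:
    """Stage 2 (stand-in): fill in full-body poses (T, J, 3) along the path."""
    T = trajectory.shape[0]
    local = 0.1 * rng.standard_normal((T, n_joints, 3))  # local joint offsets
    return trajectory[:, None, :] + local                # anchored to trajectory

def interaction_penalty(reaction: np.ndarray, partner: np.ndarray,
                        min_dist: float = 0.2) -> float:
    """Hypothetical proximity term: penalize roots closer than min_dist."""
    d = np.linalg.norm(reaction[:, 0, :] - partner[:, 0, :], axis=-1)
    return float(np.maximum(min_dist - d, 0.0).mean())

T, J = 60, 22                                   # frames, joints (SMPL-like)
partner = rng.standard_normal((T, J, 3))        # the acting person's motion
traj = predict_global_trajectory(partner)       # stage 1: global trajectory
reaction = refine_local_motion(traj, J)         # stage 2: local motion
loss = interaction_penalty(reaction, partner)   # interaction-aware term
```

The point of the structure is that stage 2 never has to reason about where the reactor goes, only how the body moves along an already-committed path, which is the alignment benefit the abstract attributes to generating trajectories first.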