🤖 AI Summary
Modeling multi-agent collaborative motion is challenging due to complex inter-agent interactions and the lack of a unified model that supports diverse scenarios. This paper introduces VIM, the Versatile Interactive Motion language model, the first framework to unify language understanding and collaborative motion generation in multi-turn interaction. It supports instruction following, role adaptation, and dynamic motion adjustment. Methodologically, we propose a multi-turn interactive motion synthesis framework, design a residual discrete motion tokenizer, and release INTER-MT2, the first synthetic dataset for multi-turn interactive motion. Our pipeline comprises motion tokenization, cross-modal alignment pretraining, instruction tuning, and model-based synthetic data augmentation. VIM achieves state-of-the-art performance across motion-to-text, text-to-motion, reactive motion generation, motion editing, and motion reasoning tasks, significantly improving motion diversity and contextual consistency.
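Since the summary only names the residual discrete motion tokenizer, here is a minimal sketch of residual vector quantization (RVQ), the standard technique behind residual discrete tokens. The layer count, codebook size, and feature dimension below are illustrative assumptions, not VIM's actual hyperparameters.

```python
# Minimal residual vector quantization (RVQ) sketch: each layer quantizes
# the residual left over by the previous layer, yielding one discrete token
# per frame per layer. Hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn

class ResidualQuantizer(nn.Module):
    def __init__(self, num_layers=4, codebook_size=512, dim=256):
        super().__init__()
        # One learnable codebook per residual layer.
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_layers)
        )

    def forward(self, z):
        # z: (batch, frames, dim) continuous motion features from an encoder.
        residual = z
        quantized = torch.zeros_like(z)
        codes = []
        for codebook in self.codebooks:
            # Squared Euclidean distance to every codebook entry: (B, T, K).
            dist = (residual.unsqueeze(-2) - codebook.weight).pow(2).sum(-1)
            idx = dist.argmin(dim=-1)        # nearest entry per frame: (B, T)
            q = codebook(idx)                # quantized residual: (B, T, dim)
            quantized = quantized + q
            residual = residual - q
            codes.append(idx)
        # Straight-through estimator so gradients flow back to the encoder.
        quantized = z + (quantized - z).detach()
        # codes stacked as (B, T, num_layers): the discrete motion tokens.
        return quantized, torch.stack(codes, dim=-1)

features = torch.randn(2, 64, 256)   # two clips, 64 frames each
rvq = ResidualQuantizer()
recon, tokens = rvq(features)
print(tokens.shape)                  # torch.Size([2, 64, 4])
```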
📝 Abstract
Recent advancements in large language models (LLMs) have greatly enhanced their ability to generate natural and contextually relevant text, making AI interactions more human-like. However, generating and understanding interactive human-like motion, where two individuals engage in coordinated movements, remains a challenge due to the complexity of modeling these coordinated interactions. Furthermore, a versatile model is required to handle diverse interactive scenarios, such as chat systems that follow user instructions or adapt to their assigned role while adjusting interaction dynamics. To tackle this problem, we introduce VIM, short for the Versatile Interactive Motion language model, which integrates both language and motion modalities to effectively understand, generate, and control interactive motions in multi-turn conversational contexts. To address the scarcity of multi-turn interactive motion data, we introduce a synthetic dataset, INTER-MT2, where we utilize pre-trained models to create diverse instructional datasets with interactive motion. Our approach first trains a motion tokenizer that encodes interactive motions into residual discrete tokens. In the pretraining stage, the model learns to align motion and text representations with these discrete tokens. During the instruction fine-tuning stage, VIM adapts to multi-turn conversations using the INTER-MT2 dataset. We evaluate the versatility of our method across motion-related tasks: motion-to-text, text-to-motion, reaction generation, motion editing, and reasoning about motion sequences. The results highlight the versatility and effectiveness of the proposed method in handling complex interactive motion synthesis.
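To make the multi-turn setting concrete, below is a hypothetical sketch of how interleaved text and motion tokens might be serialized into a single conversation for instruction tuning. The `<motion>` delimiters, per-index token names, and turn template are assumptions for illustration, not the paper's actual format.

```python
# Hypothetical serialization of a multi-turn sample mixing text and
# discrete motion tokens. Token names and template are illustrative.

def motion_to_string(codes):
    """Render per-frame token indices as placeholder motion tokens."""
    return "<motion>" + "".join(f"<m{c}>" for c in codes) + "</motion>"

turns = [
    ("user", "Two people greet each other with a handshake."),
    ("assistant", motion_to_string([17, 305, 42])),   # generated motion
    ("user", "Now make the second person bow instead."),
    ("assistant", motion_to_string([17, 88, 129])),   # edited motion
]

# One flat string the language model is trained on, turn by turn.
prompt = "\n".join(f"{role}: {text}" for role, text in turns)
print(prompt)
```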