OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing motion generation methods predominantly adhere to single-task paradigms, limiting their capacity for free composition, multi-objective optimization, and unified instruction-driven modeling. To address this, we propose OmniMoGen, a general-purpose human motion generation framework built on a novel text-motion interleaved instruction modeling paradigm. The approach exhibits emergent capabilities, including composable editing, self-reflective generation, and knowledge-guided synthesis, built upon a lightweight RVQ-VAE encoder-decoder and a Transformer-based generator. We curate X2Mo, a large-scale dataset of 137K interleaved instructions, and design AnyContext, a benchmark for evaluating contextual reasoning across diverse motion tasks. Extensive experiments demonstrate state-of-the-art performance on text-to-motion generation, motion editing, and the AnyContext benchmark, significantly improving cross-task generalization and robustness in executing complex, multi-step instructions.

📝 Abstract
Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a concise RVQ-VAE and transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion, motion editing, and AnyContext, exhibiting emerging capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward next-generation intelligent motion generation. Project Page: https://OmniMoGen.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Unifying diverse human motion generation tasks within a single framework
Enabling free-form and omni-objective motion generation
Supporting instruction-driven motion generation via interleaved text-motion instructions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for motion generation via interleaved instructions
Built on RVQ-VAE and transformer for end-to-end generation
Uses large-scale dataset X2Mo with over 137K instructions
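The RVQ-VAE mentioned above tokenizes motion by residual vector quantization: each codebook stage quantizes the residual left by the previous stage, so the reconstruction refines coarse-to-fine. A minimal sketch of that encoding step (codebook size, depth, and latent dimension are illustrative, not the paper's actual configuration):

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: stage s picks the codeword nearest to the residual
    left after stages 0..s-1. Returns per-stage indices and the summed
    reconstruction. Illustrative sketch, not the paper's implementation."""
    residual = np.asarray(x, dtype=np.float64)
    indices = []
    recon = np.zeros_like(residual)
    for cb in codebooks:
        # nearest codeword to the current residual (Euclidean distance)
        dists = np.linalg.norm(cb - residual, axis=1)
        i = int(np.argmin(dists))
        indices.append(i)
        recon += cb[i]          # accumulate the quantized estimate
        residual = residual - cb[i]  # pass what remains to the next stage
    return indices, recon

# Toy setup: 3 residual stages, 16 codewords each, 4-dim latent.
rng = np.random.default_rng(0)
dim, num_stages, codebook_size = 4, 3, 16
codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(num_stages)]
x = rng.normal(size=dim)

idx, recon = rvq_encode(x, codebooks)
```

A decoder would simply sum the indexed codewords per stage; stacking more stages shrinks the residual the transformer's discrete tokens must cover.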