🤖 AI Summary
Existing motion generation methods largely follow single-task paradigms, limiting free composition, multi-objective optimization, and unified instruction-driven modeling. To address this, we propose OmniMoGen, a general-purpose human motion generation framework built on a novel text-motion interleaved instruction modeling paradigm. Our approach exhibits emergent capabilities, including composable editing, self-reflective generation, and knowledge-guided synthesis, on top of a lightweight RVQ-VAE encoder-decoder and a Transformer-based generator. We curate X2Mo, a large-scale dataset of 137K interleaved instructions, and design AnyContext, a comprehensive benchmark for evaluating contextual reasoning across diverse motion tasks. Extensive experiments demonstrate state-of-the-art performance on text-to-motion generation, motion editing, and the AnyContext benchmark, with significant gains in cross-task generalization and in robustness on complex, multi-step instructions.
📝 Abstract
Large language models (LLMs) have unified diverse linguistic tasks within a single framework, yet such unification remains unexplored in human motion generation. Existing methods are confined to isolated tasks, limiting flexibility for free-form and omni-objective generation. To address this, we propose OmniMoGen, a unified framework that enables versatile motion generation through interleaved text-motion instructions. Built upon a compact RVQ-VAE and Transformer architecture, OmniMoGen supports end-to-end instruction-driven motion generation. We construct X2Mo, a large-scale dataset of over 137K interleaved text-motion instructions, and introduce AnyContext, a benchmark for evaluating interleaved motion generation. Experiments show that OmniMoGen achieves state-of-the-art performance on text-to-motion generation, motion editing, and AnyContext, exhibiting emergent capabilities such as compositional editing, self-reflective generation, and knowledge-informed generation. These results mark a step toward more general and intelligent motion generation. Project Page: https://OmniMoGen.github.io/.
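The abstract mentions an RVQ-VAE for motion tokenization. As a rough illustration of the residual vector quantization (RVQ) idea behind such tokenizers, and not the paper's actual model, here is a minimal NumPy sketch: each stage quantizes the residual left by the previous stage, and the reconstruction is the sum of the selected codes. The codebook sizes and dimensions below are hypothetical toy values.

```python
import numpy as np

def residual_vq(x, codebooks):
    """Residual vector quantization sketch: each codebook quantizes the
    residual left by the previous stage; the reconstruction is the sum
    of the selected code vectors."""
    residual = np.asarray(x, dtype=np.float64)
    indices = []
    recon = np.zeros_like(residual)
    for cb in codebooks:  # cb: (K, D) array of code vectors
        # pick the code nearest to the current residual
        dists = np.linalg.norm(cb - residual, axis=1)
        k = int(np.argmin(dists))
        indices.append(k)
        recon = recon + cb[k]
        residual = residual - cb[k]
    return indices, recon

# toy example: two stages of 8 codes each in 2-D (hypothetical sizes)
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 2)), 0.1 * rng.normal(size=(8, 2))]
x = np.array([0.5, -0.3])
idx, x_hat = residual_vq(x, codebooks)
```

In an RVQ-VAE the per-stage indices, rather than the continuous vectors, become the discrete tokens a Transformer can interleave with text; later stages refine the coarse reconstruction of earlier ones.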