🤖 AI Summary
Existing text-driven human motion generation methods lack fine-grained, part-level annotations, making it difficult to control individual body parts independently. To address this limitation, this work introduces the first high-quality motion dataset annotated with atomic, temporally aware textual descriptions for each body part, and proposes FrankenMotion, a diffusion-based framework that leverages structured, part-level text prompts to enable dual control over both the spatial (body parts) and temporal (atomic actions) dimensions of motion synthesis. Experiments show that FrankenMotion significantly outperforms all baseline models adapted and retrained for this new setting, and that it can compose complex motions not seen during training.
📝 Abstract
Human motion generation from text prompts has made remarkable progress in recent years. However, existing methods primarily rely on either sequence-level or action-level descriptions due to the absence of fine-grained, part-level motion annotations, which limits their controllability over individual body parts. In this work, we construct a high-quality motion dataset with atomic, temporally aware, part-level text annotations, leveraging the reasoning capabilities of large language models (LLMs). Unlike prior datasets that either provide synchronized part captions with fixed time segments or rely solely on global sequence labels, our dataset captures asynchronous and semantically distinct part movements at fine temporal resolution. Based on this dataset, we introduce a diffusion-based, part-aware motion generation framework, FrankenMotion, in which each body part is guided by its own temporally structured textual prompt. To our knowledge, this is the first work to provide atomic, temporally aware, part-level motion annotations and to present a model that enables motion generation with both spatial (body-part) and temporal (atomic-action) control. Experiments demonstrate that FrankenMotion outperforms all previous baseline models adapted and retrained for our setting, and that our model can compose motions unseen during training. Our code and dataset will be publicly available upon publication.