🤖 AI Summary
To address the core challenge of semantic-physical inconsistency in text-to-human-motion generation, this paper proposes a fine-grained alignment framework centered on Kinematic Phrases (KPs) as an intermediate representation. Methodologically, it introduces KPs as a learnable bridge between natural language and joint-level motion; designs a trainable text-to-KP alignment module; and constructs a multi-stage KP-guided diffusion decoder for progressive motion refinement. The approach integrates large language model-driven text decomposition, KP extraction, and a dedicated alignment loss function. Evaluated on standard benchmarks, the method achieves up to 1.19× and 2.34× improvements over baselines in R-Precision and FID, respectively, attaining state-of-the-art (SOTA) or second-best performance. This significantly enhances both semantic fidelity (generated motions accurately reflect their textual descriptions) and physical plausibility (motion sequences remain kinematically valid and naturalistic).
📝 Abstract
Motion synthesis plays a vital role in many fields of artificial intelligence. Among the various conditioning signals for motion generation, text can describe motion details elaborately and is easy to acquire, making text-to-motion (T2M) generation important. State-of-the-art T2M techniques mainly leverage diffusion models to generate motions with text prompts as guidance, tackling the many-to-many nature of the T2M task. However, existing T2M approaches still face the gap between the natural language domain and the physical domain, making it difficult to generate motions fully consistent with the texts. We leverage kinematic phrases (KP), an intermediate representation that bridges these two modalities, to close this gap. Our proposed method, KETA, decomposes the given text into several fine-grained decomposed texts via a language model. It trains an aligner to match each decomposed text with the KP segments extracted from the generated motions, making it possible to constrain the behavior of diffusion-based T2M models. During training, we deploy the text-KP alignment loss as an auxiliary objective to supervise the model. During inference, we refine the generated motions over multiple rounds in our decoder structure, computing the text-KP distance as the guidance signal in each round. Experiments demonstrate that KETA achieves up to 1.19× better R-Precision and 2.34× better FID on both backbones of the base model, Motion Diffusion Model. Compared with a wide range of T2M generation models, KETA achieves either the best or the second-best performance.
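The training-time alignment loss and inference-time guided refinement described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the cosine-distance loss, the `extract_kp_embs` callback, and the sign-based update rule are all hypothetical stand-ins for the paper's learned aligner and diffusion-decoder guidance.

```python
import numpy as np

def alignment_loss(text_embs: np.ndarray, kp_embs: np.ndarray) -> float:
    """Auxiliary text-KP alignment loss (hypothetical form): mean cosine
    distance between each decomposed-text embedding and its paired
    KP-segment embedding. Zero means perfectly aligned pairs."""
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    k = kp_embs / np.linalg.norm(kp_embs, axis=1, keepdims=True)
    cos_sim = np.sum(t * k, axis=1)        # per-pair cosine similarity
    return float(np.mean(1.0 - cos_sim))   # distance averaged over pairs

def refine(motion: np.ndarray, text_embs: np.ndarray,
           extract_kp_embs, n_rounds: int = 3, step: float = 0.1) -> np.ndarray:
    """Inference-time refinement loop: each round re-extracts KP
    embeddings from the current motion and uses the text-KP distance
    as a guidance signal. The toy update below merely shrinks the
    motion in proportion to the distance; the actual method feeds the
    signal into a multi-stage diffusion decoder."""
    for _ in range(n_rounds):
        kp_embs = extract_kp_embs(motion)
        dist = alignment_loss(text_embs, kp_embs)
        motion = motion - step * dist * np.sign(motion)
    return motion
```

Keeping the alignment term as an auxiliary loss (rather than the sole objective) lets the base diffusion loss govern motion realism while the KP term pulls generations toward the decomposed text semantics.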