🤖 AI Summary
To address semantic misalignment and kinematic artifacts in text-to-motion generation, this paper proposes a dual-path anchored diffusion model. The method introduces a time-frequency collaborative alignment mechanism, incorporating a lightweight MoCLIP encoder and a DCT-based low-frequency semantic extraction module. An adaptive temporal modulation module dynamically fuses coarse- and fine-grained semantics during denoising, mitigating gradient attenuation in the network's deep layers. Evaluated on HumanML3D and KIT-ML, the approach achieves state-of-the-art FID scores of 0.035 and 0.123, respectively, with 1.4× faster convergence than baseline methods. The core innovations are the time-frequency dual-path anchoring mechanism and semantic-aware adaptive temporal modulation, which jointly enhance motion fidelity and text-motion alignment accuracy.
📝 Abstract
While current diffusion-based models, typically built on U-Net architectures, have shown promising results on the text-to-motion generation task, they still suffer from semantic misalignment and kinematic artifacts. Through analysis, we identify severe gradient attenuation in the deep layers of the network as a key bottleneck, leading to insufficient learning of high-level features. To address this issue, we propose **LUMA** (**L**ow-dimension **U**nified **M**otion **A**lignment), a text-to-motion diffusion model that incorporates dual-path anchoring to enhance semantic alignment. The first path incorporates a lightweight MoCLIP model trained via contrastive learning without relying on external data, offering semantic supervision in the temporal domain. The second path introduces complementary alignment signals in the frequency domain, extracted from low-frequency DCT components known for their rich semantic content. These two anchors are adaptively fused through a temporal modulation mechanism, allowing the model to progressively transition from coarse alignment to fine-grained semantic refinement throughout the denoising process. Experimental results on HumanML3D and KIT-ML demonstrate that LUMA achieves state-of-the-art performance, with FID scores of 0.035 and 0.123, respectively. Furthermore, LUMA accelerates convergence by 1.4× compared to the baseline, making it an efficient and scalable solution for high-fidelity text-to-motion generation.
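To make the frequency-domain path concrete, the sketch below illustrates the general idea of extracting a low-frequency DCT anchor from a motion sequence and a simple timestep-dependent schedule for blending the two anchors during denoising. This is a minimal illustration of the concept, not the paper's implementation: the `keep_ratio` parameter, the linear fusion schedule, and the function names are all hypothetical.

```python
import numpy as np
from scipy.fft import dct, idct

def low_freq_semantic_anchor(motion, keep_ratio=0.25):
    """Keep only the lowest-frequency temporal DCT coefficients.

    motion: (T, D) array of pose features over T frames.
    keep_ratio: hypothetical fraction of coefficients retained;
                the paper does not specify this value.
    """
    T = motion.shape[0]
    coeffs = dct(motion, axis=0, norm="ortho")   # DCT along time, per feature dim
    k = max(1, int(T * keep_ratio))
    coeffs[k:] = 0.0                             # zero out high-frequency components
    return idct(coeffs, axis=0, norm="ortho")    # smooth, semantics-bearing signal

def fusion_weight(t, num_steps):
    """Toy linear schedule: weight on the fine-grained temporal anchor grows
    as denoising proceeds; (1 - weight) goes to the coarse DCT anchor."""
    return t / num_steps
```

The low-pass reconstruction discards fast, frame-level detail while preserving the slow trajectory of the motion, which is why low-frequency DCT components can serve as a coarse semantic anchor.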