🤖 AI Summary
Existing text-to-motion generation methods fall short on semantic consistency, motion realism, and alignment with human preferences, while post-training approaches are constrained by reliance on a single motion representation, narrow optimization objectives, and high computational cost. To address these issues, this work proposes a unified reinforcement fine-tuning framework that establishes a text-anchored shared semantic space to support multidimensional reward learning and introduces a self-refinement preference learning mechanism that requires no additional annotations. By integrating heterogeneous motion-representation mapping, recursive-gradient decoupling, and an efficient progressive fine-tuning strategy (EasyTune), the method achieves substantial gains in both efficiency and performance: an FID of 0.132 on MLD with a peak memory footprint of only 22.10 GB (15.22 GB less than DRaFT), a 22.9% FID reduction on ACMDM, and a 12.6% R-Precision gain with a 23.3% FID improvement on HY Motion.
📝 Abstract
Text-to-motion generation has advanced with diffusion- and flow-based generative models, yet supervised pretraining alone is insufficient to align models with high-level objectives such as semantic consistency, realism, and human preference. Existing post-training methods have key limitations: they (1) target a specific motion representation, such as joints; (2) optimize a particular aspect, such as text-motion alignment, and may compromise other factors; and (3) incur substantial computational overhead, data dependence, and coarse-grained optimization. We present a reinforcement fine-tuning framework comprising a heterogeneous-representation, multidimensional reward model, MotionReward, and an efficient, fine-grained fine-tuning method, EasyTune. To obtain a unified semantic representation, MotionReward maps heterogeneous motions into a shared semantic space anchored by text, enabling multidimensional reward learning; Self-Refinement Preference Learning further enhances semantics without additional annotations. For efficient and effective fine-tuning, we identify the recursive gradient dependence across denoising steps as the key bottleneck and propose EasyTune, which optimizes step-wise rather than over the full trajectory, yielding dense, fine-grained, and memory-efficient updates. Extensive experiments validate the effectiveness of our framework: it achieves an FID of 0.132 at 22.10 GB peak memory for the MLD model, saving up to 15.22 GB over DRaFT; it reduces FID by 22.9% on the joint-based ACMDM; and it achieves a 12.6% R-Precision gain and a 23.3% FID improvement on rotation-based HY Motion. Our project page with code is publicly available.
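The core idea behind EasyTune, as the abstract describes it, is that backpropagating a reward through the full denoising trajectory creates a recursive gradient dependence across steps, which dominates memory cost; optimizing step-wise, with each step's input treated as a constant, removes that dependence. The toy sketch below illustrates only this gradient-decoupling principle on a one-parameter "denoiser" with an analytic reward; the function names, the reward, and the update rule are illustrative assumptions, not the paper's actual models or API.

```python
# Illustrative sketch (NOT the paper's code): step-wise reward fine-tuning
# of a toy one-parameter "denoiser"  x_out = x_in - theta * x_in.
# Decoupling the recursive gradient means each update differentiates the
# reward only through the CURRENT denoising step, treating its input x_in
# as a constant ("detached"), so memory does not grow with trajectory length.

def finetune_stepwise(theta, x0, steps=5, lr=0.01):
    x = x0
    for _ in range(steps):
        x_in = x                      # detached input: no grad into earlier steps
        x_out = x_in - theta * x_in   # one denoising step
        # toy reward r(x) = -x^2 (prefers outputs near zero); loss = -r = x^2
        grad = 2.0 * x_out * (-x_in)  # d(x_out^2)/d(theta) with x_in constant
        theta -= lr * grad            # dense, per-step parameter update
        x = x_out
    return theta, x

theta, x = finetune_stepwise(theta=0.1, x0=2.0)
```

In a real diffusion or flow model the analytic gradient would come from autograd (e.g., detaching the latent before each step and calling backward on a per-step reward), but the memory argument is the same: the computation graph spans one step instead of the whole trajectory.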