🤖 AI Summary
Text-to-motion (T2M) generation suffers from limited diversity, motion discontinuity, and modality collapse. To address these challenges, we propose ReMoMask, a unified framework integrating retrieval-augmented generation (RAG) with masked motion modeling. Our key contributions are: (1) a bidirectional momentum text-motion model that enhances cross-modal retrieval accuracy; (2) a semantic spatio-temporal attention mechanism that ensures biomechanically plausible motion synthesis; and (3) a RAG-enhanced classifier-free guidance strategy that improves generalization. Built upon the MoMask RVQ-VAE architecture, our method achieves FID improvements of 3.88% on HumanML3D and 10.97% on KIT-ML over RAG-T2M, enabling high-fidelity, semantically aligned motion generation in significantly fewer sampling steps, with superior diversity, continuity, and physical plausibility.
📝 Abstract
Text-to-Motion (T2M) generation aims to synthesize realistic and semantically aligned human motion sequences from natural language descriptions. However, current approaches face dual challenges: generative models (e.g., diffusion models) suffer from limited diversity, error accumulation, and physical implausibility, while Retrieval-Augmented Generation (RAG) methods exhibit diffusion inertia, partial-mode collapse, and asynchronous artifacts. To address these limitations, we propose ReMoMask, a unified framework integrating three key innovations: 1) a Bidirectional Momentum Text-Motion Model that decouples the negative-sample scale from the batch size via momentum queues, substantially improving cross-modal retrieval precision; 2) a Semantic Spatio-temporal Attention mechanism that enforces biomechanical constraints during part-level fusion to eliminate asynchronous artifacts; and 3) RAG-Classifier-Free Guidance, which incorporates a small amount of unconditional generation to enhance generalization. Built upon MoMask's RVQ-VAE, ReMoMask efficiently generates temporally coherent motions in minimal steps. Extensive experiments on standard benchmarks demonstrate the state-of-the-art performance of ReMoMask, achieving a 3.88% and 10.97% improvement in FID on HumanML3D and KIT-ML, respectively, over the previous SOTA method RAG-T2M. Code: https://github.com/AIGeeksGroup/ReMoMask. Website: https://aigeeksgroup.github.io/ReMoMask.
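As a rough illustration of two of the ingredients above, the sketch below shows (a) a MoCo-style momentum queue that stores key embeddings so that the pool of contrastive negatives is decoupled from the training batch size, and (b) classifier-free guidance in its common form, extrapolating conditional predictions away from unconditional ones. All class and function names, shapes, and hyperparameters here are illustrative assumptions, not ReMoMask's actual implementation.

```python
import numpy as np

class MomentumQueue:
    """MoCo-style queue of key embeddings (illustrative sketch only).
    Negatives for the contrastive loss are drawn from the queue, so their
    count is set by queue_size rather than by the batch size."""

    def __init__(self, dim: int, queue_size: int, momentum: float = 0.999):
        self.queue = np.zeros((queue_size, dim), dtype=np.float32)
        self.ptr = 0
        self.size = queue_size
        self.momentum = momentum  # EMA rate for the key encoder

    def ema_update(self, key_params, query_params):
        # Momentum (EMA) update of the key encoder's parameters:
        #   key <- m * key + (1 - m) * query
        return [self.momentum * k + (1.0 - self.momentum) * q
                for k, q in zip(key_params, query_params)]

    def enqueue(self, keys: np.ndarray) -> None:
        # Overwrite the oldest slots with the newest key embeddings.
        idx = (self.ptr + np.arange(keys.shape[0])) % self.size
        self.queue[idx] = keys
        self.ptr = int((self.ptr + keys.shape[0]) % self.size)

    def contrastive_logits(self, queries: np.ndarray, keys: np.ndarray,
                           temperature: float = 0.07) -> np.ndarray:
        # One positive logit per query plus queue_size negative logits.
        l_pos = np.sum(queries * keys, axis=1, keepdims=True)  # (B, 1)
        l_neg = queries @ self.queue.T                         # (B, K)
        return np.concatenate([l_pos, l_neg], axis=1) / temperature


def cfg_logits(cond: np.ndarray, uncond: np.ndarray, scale: float) -> np.ndarray:
    """Classifier-free guidance: extrapolate the conditional prediction
    away from the unconditional one by a guidance scale."""
    return (1.0 + scale) * cond - scale * uncond
```

During training, text and motion embeddings from slowly updated momentum encoders would be pushed into the queue each step, while at inference the unconditional branch (trained by occasionally dropping the text/retrieval condition) feeds the second argument of `cfg_logits`.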