🤖 AI Summary
Diffusion models for text-to-3D human motion generation suffer from misalignment between textual semantics and motion distributions. Method: The paper proposes Reward-guided sampling Alignment (ReAlign), a framework comprising a step-aware reward model and a reward-guided sampling strategy that jointly optimize semantic consistency and motion realism during denoising; the reward model integrates step-aware tokens with text-alignment and motion-alignment modules to dynamically balance probability density modeling against semantic constraints. Contribution/Results: Evaluated on multiple benchmarks, the method significantly outperforms state-of-the-art approaches in both motion generation quality and cross-modal retrieval, with substantial improvements in text-motion alignment accuracy and visual fidelity.
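As a rough illustration of how such a step-aware reward model could be wired up, the sketch below prepends a learned per-timestep token to the motion sequence and scores text alignment (via cosine similarity) and motion realism (via an MLP head) from that token's readout. All module names, dimensions (e.g. 263-dim HumanML3D-style motion features), and architectural choices here are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class StepAwareRewardModel(nn.Module):
    """Illustrative step-aware reward model (assumed design, not the paper's):
    a learned token per diffusion timestep is prepended to the motion sequence,
    and two heads score text alignment and motion realism."""

    def __init__(self, motion_dim=263, text_dim=512, hidden=256, num_steps=1000):
        super().__init__()
        self.step_token = nn.Embedding(num_steps, hidden)   # step-aware token
        self.motion_proj = nn.Linear(motion_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.motion_align_head = nn.Linear(hidden, 1)       # realism score

    def forward(self, noisy_motion, t, text_emb):
        # noisy_motion: (B, L, motion_dim); t: (B,) int timesteps; text_emb: (B, text_dim)
        tok = self.step_token(t).unsqueeze(1)               # (B, 1, hidden)
        seq = torch.cat([tok, self.motion_proj(noisy_motion)], dim=1)
        h = self.encoder(seq)[:, 0]                         # step-token readout
        r_text = torch.cosine_similarity(h, self.text_proj(text_emb), dim=-1)
        r_motion = self.motion_align_head(h).squeeze(-1)
        return r_text + r_motion                            # combined alignment reward
```

Conditioning the reward on the timestep token lets the model judge heavily noised motions differently from nearly denoised ones, which is the point of making the reward "step-aware".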
📝 Abstract
Text-to-motion generation, which synthesizes 3D human motions from text inputs, holds immense potential for applications in gaming, film, and robotics. Recently, diffusion-based methods have been shown to generate more diverse and realistic motions. However, a misalignment between the text and motion distributions in diffusion models leads to semantically inconsistent or low-quality motions. To address this limitation, we propose Reward-guided sampling Alignment (ReAlign), comprising a step-aware reward model that assesses alignment quality during denoising sampling and a reward-guided strategy that directs the diffusion process toward an optimally aligned distribution. The reward model integrates step-aware tokens and combines a text-aligned module for semantic consistency with a motion-aligned module for realism, refining noisy motions at each timestep to balance probability density and alignment. Extensive experiments on both motion generation and retrieval tasks demonstrate that our approach significantly improves text-motion alignment and motion quality compared to existing state-of-the-art methods.
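To make the reward-guided strategy concrete, below is a hedged sketch of a denoising loop in the spirit of classifier guidance: at each timestep the intermediate motion is nudged along the gradient of the reward model's alignment score. The `denoiser` and `reward_model` callables, the deterministic DDIM-style update, and the `guidance_scale` knob are assumed interfaces, not the paper's exact algorithm.

```python
import torch

def reward_guided_sample(denoiser, reward_model, text_emb, shape,
                         alphas_cumprod, guidance_scale=0.1):
    """Hedged sketch of reward-guided sampling (assumed interfaces):
    `denoiser(x_t, t, text_emb)` is assumed to predict the clean motion x_0,
    and `reward_model(x_t, t, text_emb)` returns a per-sample alignment score."""
    batch = shape[:1]
    x_t = torch.randn(shape)
    for t in reversed(range(len(alphas_cumprod))):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        t_batch = torch.full(batch, t)

        # Standard text-conditioned denoising prediction.
        with torch.no_grad():
            x0_pred = denoiser(x_t, t_batch, text_emb)

        # Step-aware reward guidance: gradient of the alignment reward
        # with respect to the current noisy motion.
        x_in = x_t.detach().requires_grad_(True)
        r = reward_model(x_in, t_batch, text_emb).sum()
        grad = torch.autograd.grad(r, x_in)[0]

        # Deterministic DDIM-style update toward x0_pred, then a shift
        # toward the reward model's better-aligned distribution.
        eps = (x_t - a_t.sqrt() * x0_pred) / (1.0 - a_t).sqrt()
        x_t = a_prev.sqrt() * x0_pred + (1.0 - a_prev).sqrt() * eps
        x_t = x_t + guidance_scale * grad
    return x_t
```

A larger `guidance_scale` pushes samples harder toward the reward model's notion of alignment at the risk of drifting off the learned motion manifold; per the abstract, the actual method balances this alignment pressure against probability density at each step.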