🤖 AI Summary
To address the slow inference and high computational cost of diffusion models in text-driven 3D human motion generation, this work pioneers the integration of Generative Adversarial Networks (GANs) into motion latent-space modeling, proposing a lightweight latent-space GAN architecture. Methodologically, it learns compact latent representations while enforcing cross-modal alignment between textual descriptions and motion sequences, achieving high-fidelity generation on the HumanML3D and HumanAct12 benchmarks. Experiments demonstrate state-of-the-art performance: an FID of 0.482 on HumanML3D with more than 91% fewer FLOPs than latent diffusion baselines, enabling real-time inference. The core contribution lies in challenging the conventional view that GANs are ill-suited for modeling high-dimensional temporal dynamics in motion generation; this work empirically validates that, within a carefully designed latent space, GANs can simultaneously achieve high fidelity and computational efficiency.
📝 Abstract
Human motion synthesis conditioned on textual input has gained significant attention in recent years due to its potential applications in domains such as gaming, film production, and virtual reality. Conditioned motion synthesis takes a text input and outputs a 3D motion sequence corresponding to the text. While previous works have explored motion synthesis using raw motion data and latent-space representations with diffusion models, these approaches often suffer from long training and inference times. In this paper, we introduce a novel framework that employs Generative Adversarial Networks (GANs) in the latent space to enable faster training and inference while achieving results comparable to those of state-of-the-art diffusion methods. We perform experiments on the HumanML3D and HumanAct12 benchmarks and demonstrate that a remarkably simple GAN in the latent space achieves an FID of 0.482 with a FLOPs reduction of more than 91% compared to a latent diffusion model. Our work opens up new possibilities for efficient, high-quality motion synthesis using latent-space GANs.
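The pipeline the abstract describes, a generator that maps a text embedding plus noise to a compact motion latent code, and a discriminator that scores (latent, text) pairs, can be sketched as below. This is a minimal illustrative sketch, not the paper's actual implementation: all dimensions, the single-layer "MLPs" with random weights, and the function names are assumptions chosen only to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not the paper's actual sizes).
TEXT_DIM, NOISE_DIM, LATENT_DIM, HIDDEN = 512, 128, 256, 384

def mlp(in_dim, out_dim):
    """Random-weight linear+ReLU layer standing in for a trained MLP."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.02
    b = np.zeros(out_dim)
    return lambda x: np.maximum(x @ W + b, 0.0)

# Generator: (text embedding, noise) -> motion latent code.
gen = mlp(TEXT_DIM + NOISE_DIM, LATENT_DIM)

# Discriminator: scores (latent code, text embedding) pairs as real/fake,
# which is what enforces cross-modal alignment during adversarial training.
disc_body = mlp(LATENT_DIM + TEXT_DIM, HIDDEN)
def disc_score(latent, text_emb):
    h = disc_body(np.concatenate([latent, text_emb]))
    return 1.0 / (1.0 + np.exp(-h.sum()))  # sigmoid over a pooled feature

def generate_latent(text_emb):
    z = rng.standard_normal(NOISE_DIM)
    return gen(np.concatenate([text_emb, z]))

text_emb = rng.standard_normal(TEXT_DIM)  # stand-in for a frozen text encoder output
fake_latent = generate_latent(text_emb)
score = disc_score(fake_latent, text_emb)
```

At inference time only the generator and a pretrained latent-to-motion decoder run, which is why a single forward pass through a small latent-space GAN costs far fewer FLOPs than the many iterative denoising steps of a latent diffusion model.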