AI Summary
This paper addresses three key challenges in music-driven 2D dance pose generation: poor temporal coherence, difficulty in rhythm alignment, and weak generalization to real-world scenarios. To this end, we propose a multi-channel image synthesis framework based on the diffusion Transformer (DiT). Dance sequences are encoded as one-hot images and compressed into latent representations via a pre-trained image VAE. We introduce a time-shared temporal indexing mechanism to enable precise cross-modal alignment between musical tokens and pose latents. Additionally, a reference-pose conditioning strategy is designed to ensure anatomical consistency and stability during long-sequence segment stitching. Evaluated on AIST++ 2D and a large-scale in-the-wild dataset, our method achieves state-of-the-art performance across FID, average pose distance (APD), action-music synchronization accuracy, and human preference scores. Ablation studies confirm the significant contributions of each component.
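The time-shared temporal indexing mechanism can be illustrated with a minimal sketch: both streams are mapped onto a single clock so that a pose latent and the music tokens sounding at the same instant receive the same positional index. The bin granularity (`fps`) and uniform-spacing assumption below are illustrative choices, not the paper's exact scheme.

```python
import numpy as np

def shared_time_indices(num_music_tokens, num_pose_latents, duration_s, fps=30):
    """Assign music tokens and pose latents indices on one shared time grid.

    Both sequences are assumed to evenly tile the same clip duration; each
    element gets the index of the time bin its start falls into, so elements
    from the two modalities that cover the same moment share an index.
    """
    music_t = np.linspace(0.0, duration_s, num_music_tokens, endpoint=False)
    pose_t = np.linspace(0.0, duration_s, num_pose_latents, endpoint=False)
    # Quantize timestamps to integer indices on the shared grid.
    music_idx = np.floor(music_t * fps).astype(int)
    pose_idx = np.floor(pose_t * fps).astype(int)
    return music_idx, pose_idx
```

With twice as many music tokens as pose latents over the same duration, every pose latent shares its index with every second music token, which is the alignment property the indexing is meant to enforce.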
Abstract
Recent pose-to-video models can translate 2D pose sequences into photorealistic, identity-preserving dance videos, so the key challenge is to generate temporally coherent, rhythm-aligned 2D poses from music, especially under complex, high-variance in-the-wild distributions. We address this by reframing music-to-dance generation as a music-token-conditioned multi-channel image synthesis problem: 2D pose sequences are encoded as one-hot images, compressed by a pretrained image VAE, and modeled with a DiT-style backbone, allowing us to inherit architectural and training advances from modern text-to-image models and better capture high-variance 2D pose distributions. On top of this formulation, we introduce (i) a time-shared temporal indexing scheme that explicitly synchronizes music tokens and pose latents over time and (ii) a reference-pose conditioning strategy that preserves subject-specific body proportions and on-screen scale while enabling long-horizon segment-and-stitch generation. Experiments on a large in-the-wild 2D dance corpus and the calibrated AIST++ 2D benchmark show consistent improvements over representative music-to-dance methods in pose- and video-space metrics and human preference, and ablations validate the contributions of the representation, temporal indexing, and reference conditioning. See supplementary videos at https://hot-dance.github.io
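The one-hot image representation of a pose sequence can be sketched as follows: each joint is rasterized into its own channel, with a single active pixel at the joint's quantized location. The spatial resolution and normalized-coordinate convention here are illustrative assumptions; the paper's exact encoding may differ.

```python
import numpy as np

def poses_to_onehot_images(keypoints, height=64, width=64):
    """Rasterize a 2D pose sequence into per-joint one-hot images.

    keypoints: (T, J, 2) array of (x, y) coordinates, assumed normalized
    to [0, 1). Returns a (T, J, height, width) array where each joint
    channel contains exactly one active pixel, suitable for compression
    by an image VAE into pose latents.
    """
    T, J, _ = keypoints.shape
    images = np.zeros((T, J, height, width), dtype=np.float32)
    # Quantize continuous coordinates to pixel indices, clipping to bounds.
    xs = np.clip((keypoints[..., 0] * width).astype(int), 0, width - 1)
    ys = np.clip((keypoints[..., 1] * height).astype(int), 0, height - 1)
    for t in range(T):
        for j in range(J):
            images[t, j, ys[t, j], xs[t, j]] = 1.0
    return images
```

Treating each joint as a separate channel keeps the representation compatible with standard multi-channel image VAEs while avoiding ambiguity between joints that would arise from drawing all keypoints into a single channel.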