DuetGen: Music Driven Two-Person Dance Generation via Hierarchical Masked Modeling

📅 2025-06-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses music-driven interactive two-person dance generation, where the core challenge lies in jointly modeling inter-dancer coordination and precise music-rhythm alignment. The authors propose a hierarchical discretization framework: (1) representing the synchronized two-person dance as a unified sequence of discrete tokens, using a VQ-VAE that separates coarse semantic features from fine-grained motion details; and (2) a two-stage generative architecture in which two masked transformers map music to these tokens — one producing semantic-level tokens, the other, conditioned on music and those tokens, producing motion-level tokens — by iteratively filling a fully masked sequence rather than decoding autoregressively. Evaluated on a benchmark duet dance dataset, the method improves motion realism, music-dance synchrony, and inter-dancer coordination, and quantitative metrics together with user studies consistently demonstrate gains over prior approaches. Per the authors, this is among the first frameworks to achieve high-fidelity, strongly interactive two-person dance synthesis with explicit modeling of both musical structure and interpersonal dynamics.

📝 Abstract
We present DuetGen, a novel framework for generating interactive two-person dances from music. The key challenge of this task lies in the inherent complexities of two-person dance interactions, where the partners need to synchronize both with each other and with the music. Inspired by recent advances in motion synthesis, we propose a two-stage solution: encoding two-person motions into discrete tokens and then generating these tokens from music. To effectively capture intricate interactions, we represent both dancers' motions as a unified whole to learn the necessary motion tokens, and adopt a coarse-to-fine learning strategy in both stages. Our first stage utilizes a VQ-VAE that hierarchically separates high-level semantic features at a coarse temporal resolution from low-level details at a finer resolution, producing two discrete token sequences at different abstraction levels. Subsequently, in the second stage, two generative masked transformers learn to map music signals to these dance tokens: the first produces high-level semantic tokens, and the second, conditioned on the music and these semantic tokens, produces the low-level tokens. We train both transformers to predict randomly masked tokens within the sequence, enabling them to iteratively generate motion tokens by filling an empty token sequence during inference. Through hierarchical masked modeling and a dedicated interaction representation, DuetGen generates synchronized, interactive two-person dances across various genres. Extensive experiments and user studies on a benchmark duet dance dataset demonstrate state-of-the-art performance of DuetGen in motion realism, music-dance alignment, and partner coordination.
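The first stage's core operation — turning continuous motion features into discrete tokens via a learned codebook — can be sketched in a few lines. This is a generic vector-quantization step, not the paper's implementation; the codebook size (8), feature dimension (4), and sequence length (6) are illustrative placeholders, and a real VQ-VAE would learn the codebook and encoder jointly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 8 codebook entries, 4-dim features.
codebook = rng.normal(size=(8, 4))   # stand-in for a learned VQ-VAE codebook
features = rng.normal(size=(6, 4))   # 6 time steps of encoded two-person motion

# Vector quantization: map each feature vector to its nearest codebook entry.
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = dists.argmin(axis=1)        # discrete token sequence, shape (6,)

print(tokens)                        # e.g. indices in [0, 8)
```

In DuetGen this quantization happens at two temporal resolutions, yielding the coarse semantic-token and fine detail-token sequences the second stage then generates from music.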
Problem

Research questions and friction points this paper is trying to address.

Generating synchronized two-person dances from music
Modeling complex partner interactions in dance motions
Achieving music-dance alignment and partner coordination
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical masked modeling for dance generation
Two-stage VQ-VAE and transformer framework
Coarse-to-fine music-to-dance token learning
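The masked-modeling inference loop described above — start from a fully masked token sequence and iteratively commit the most confident predictions — can be illustrated with a toy stand-in for the transformer. The `toy_predict` function, the mask id, and the halving unmask schedule are all assumptions for illustration, not the paper's specifics:

```python
import numpy as np

rng = np.random.default_rng(1)
T, V, MASK = 10, 8, -1               # sequence length, vocab size, mask id (illustrative)

def toy_predict(tokens):
    """Stand-in for the masked transformer: random logits per position."""
    return rng.random((len(tokens), V))

tokens = np.full(T, MASK)            # inference starts from an all-masked sequence
while (tokens == MASK).any():
    logits = toy_predict(tokens)
    conf = logits.max(axis=1)        # per-position confidence
    pred = logits.argmax(axis=1)     # per-position predicted token
    masked = tokens == MASK
    k = (masked.sum() + 1) // 2      # unmask roughly half the remaining positions
    # Rank still-masked positions by confidence; commit the top k.
    order = np.argsort(-np.where(masked, conf, -np.inf))
    tokens[order[:k]] = pred[order[:k]]

print(tokens)                        # fully decoded token sequence
```

Unlike autoregressive decoding, each pass predicts all masked positions in parallel; in DuetGen this loop runs twice, once for the semantic tokens and once, additionally conditioned on them, for the detail tokens.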