🤖 AI Summary
Existing group dance generation methods struggle with high computational complexity, inadequate interaction modeling, and motion collisions in long-duration, multi-dancer scenarios, hindering efficient and stable use in interactive applications. This work proposes a scalable spatiotemporally decoupled diffusion framework that models inter-dancer spatial relationships via lightweight distance-aware graph convolution and introduces a stream-friendly temporally aligned attention mask alongside a tailored noise scheduling strategy. These designs significantly improve generation efficiency and coordination for long sequences while preserving high motion quality. Experiments on the AIOZ-GDance dataset demonstrate that the proposed method matches the generation quality of state-of-the-art approaches at lower inference latency, enabling scalable group choreography synthesis for extended durations and larger numbers of dancers.
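To make the distance-aware graph convolution concrete, here is a minimal NumPy sketch of the general idea; the actual layer in the paper, its kernel, and its parameterization are not specified in this summary, so the Gaussian weighting and normalization below are illustrative assumptions:

```python
import numpy as np

def distance_aware_graph_conv(x, pos, sigma=1.0):
    """Illustrative distance-aware graph convolution over dancers.

    x:   (N, D) per-dancer feature vectors
    pos: (N, 3) dancer root positions
    Edge weights decay with inter-dancer distance via a Gaussian
    kernel (an assumption), so nearby dancers exchange information
    more strongly -- which is what helps discourage collisions.
    """
    diff = pos[:, None, :] - pos[None, :, :]   # (N, N, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)       # (N, N) pairwise distances
    adj = np.exp(-dist**2 / (2 * sigma**2))    # distance-aware edge weights
    adj /= adj.sum(axis=1, keepdims=True)      # row-normalize to a convex mix
    return adj @ x                             # aggregate neighbor features
```

Because the adjacency is a fixed-size N x N matrix over dancers (not over time steps), this spatial pass stays cheap as sequence length grows, which is the scalability argument the summary makes.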
📝 Abstract
Group dance generation from music requires synchronizing multiple dancers while maintaining spatial coordination, making it highly relevant to applications such as film production, gaming, and animation. Recent group dance generation models have achieved promising generation quality, but they remain difficult to deploy in interactive scenarios due to bidirectional attention dependencies. As the number of dancers and the sequence length increase, the attention computation required for aligning music conditions with motion sequences grows quadratically, reducing efficiency and increasing the risk of motion collisions. Effectively modeling dense spatiotemporal interactions is therefore essential, yet existing methods often struggle to capture such complexity, resulting in limited scalability and unstable multi-dancer coordination. To address these challenges, we propose ST-GDance++, a scalable framework that decouples spatial and temporal dependencies to enable efficient and collision-aware group choreography generation. For spatial modeling, we introduce lightweight distance-aware graph convolutions to capture inter-dancer relationships while reducing computational overhead. For temporal modeling, we design a diffusion noise scheduling strategy together with an efficient temporally aligned attention mask, enabling stream-based generation for long motion sequences and improving scalability in long-duration scenarios. Experiments on the AIOZ-GDance dataset show that ST-GDance++ achieves competitive generation quality with significantly reduced latency compared to existing methods.
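The stream-friendly alternative to bidirectional attention can be sketched as a causal, windowed mask: each motion frame attends only to a bounded span of past frames, so per-step cost is constant and frames can be emitted as the music arrives. The exact mask in ST-GDance++ is not given in this abstract; the window size and strict causality below are assumptions for illustration:

```python
import numpy as np

def streaming_attention_mask(T, window=8):
    """Causal, windowed (T, T) boolean attention mask.

    mask[t, s] is True iff frame t may attend to frame s, restricted
    to s in [t - window + 1, t]. Unlike a full bidirectional mask,
    no frame depends on the future, which permits stream-based
    generation, and each row has at most `window` active entries,
    avoiding quadratic growth with sequence length.
    """
    idx = np.arange(T)
    rel = idx[:, None] - idx[None, :]   # rel[t, s] = t - s
    return (rel >= 0) & (rel < window)
```

Such a mask would typically be passed to a standard attention layer in place of the full bidirectional mask; everything outside the window is masked to negative infinity before the softmax.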