🤖 AI Summary
Existing AI-based dance generation methods predominantly rely on single-person motion-capture data, overlooking the intrinsic spatial interactivity between dancers and lacking deep artistic involvement, which limits their artistic applicability. This paper introduces the first AI choreography framework explicitly designed for two-person interactive dance. We establish a modeling paradigm co-developed with artists, propose a Probabilistic Attention-enhanced Variational Autoencoder (PA-VAE) that integrates probabilistic modeling with attention mechanisms, and design a custom loss function balancing motion smoothness and choreographic logic. Given a solo dancer’s motion sequence as input, our model generates temporally coherent and spatially coordinated partner dance sequences. Quantitative and qualitative evaluations demonstrate superior generation quality and strong artistic relevance. We publicly release the source code and collaboration guidelines, enhancing AI’s usability and creative support for professional choreographers.
📝 Abstract
Existing methods for AI-generated dance primarily train on motion capture data from solo dance performances, but a critical feature of dance in nearly any genre is the interaction of two or more bodies in space. Moreover, many works at the intersection of AI and dance fail to incorporate the ideas and needs of the artists themselves into their development process, yielding models that produce far more useful insights for the AI community than for the dance community. This work addresses both needs of the field by proposing an AI method to model the complex interactions between pairs of dancers and detailing how the technical methodology can be shaped by ongoing co-creation with the artistic stakeholders who curated the movement data. Our model is a probability-and-attention-based Variational Autoencoder that generates a choreographic partner conditioned on an input dance sequence. We construct a custom loss function to enhance the smoothness and coherence of the generated choreography. Our code is open-source, and we also document strategies that other interdisciplinary research teams can use to facilitate collaboration and strong communication between artists and technologists.