🤖 AI Summary
Existing symbolic music generation models rely on note-level tokenization, resulting in excessively long sequences, limited context windows, and poor modeling of long-range musical structure. To address these limitations, we propose PhraseLDM, the first phrase-level latent diffusion framework for music generation. PhraseLDM introduces PhraseVAE—a novel variational autoencoder that compresses variable-length musical phrases into compact, high-fidelity 64-dimensional latent representations—and enables non-autoregressive, single-pass generation of full 128-bar polyphonic compositions within a unified multi-track latent space. Integrating variational inference, latent diffusion, and phrase-level sequence modeling, PhraseLDM achieves state-of-the-art efficiency with only 45M parameters, generating 8-minute high-quality multi-instrument pieces in seconds. Quantitative and qualitative evaluations demonstrate significant improvements over baselines in local textural coherence, instrument-specific timbral fidelity, global structural consistency, generative diversity, and scalability.
📝 Abstract
This technical report presents a new paradigm for full-song symbolic music generation. Existing symbolic models operate on note-attribute tokens and suffer from extremely long sequences, limited context length, and weak support for long-range structure. We address these issues by introducing PhraseVAE and PhraseLDM, the first latent diffusion framework designed for full-song multi-track symbolic music. PhraseVAE compresses variable-length polyphonic note sequences into compact 64-dimensional phrase-level representations with high reconstruction fidelity, allowing efficient training and a well-structured latent space. Built on this latent space, PhraseLDM generates an entire multi-track song in a single pass without any autoregressive components. The system eliminates bar-wise sequential modeling, supports up to 128 bars of music (8 minutes at 64 bpm), and produces complete songs with coherent local texture, idiomatic instrument patterns, and clear global structure. With only 45M parameters, our framework generates a full song within seconds while maintaining competitive musical quality and generation diversity. Together, these results show that phrase-level latent diffusion provides an effective and scalable solution to long-sequence modeling in symbolic music generation. We hope this work encourages future symbolic music research to move beyond note-attribute tokens and to consider phrase-level units as a more effective and musically meaningful modeling target.
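To make the data flow concrete, the sketch below illustrates the shapes involved in phrase-level latent diffusion as described above. This is not the authors' implementation: the phrase count, per-note feature size, encoder, and noise schedule are all hypothetical stand-ins. The point is only that variable-length note sequences become a fixed-size `[num_phrases, 64]` latent array, over which a standard diffusion forward process operates in one shot rather than bar by bar.

```python
# Hypothetical sketch of phrase-level latent diffusion shapes (NOT the
# report's actual PhraseVAE/PhraseLDM code). A song is grouped into
# phrases; each variable-length phrase is compressed into a
# 64-dimensional latent, and diffusion acts on the whole latent
# sequence in a single non-autoregressive pass.
import numpy as np

rng = np.random.default_rng(0)

NUM_PHRASES = 32    # assumed number of phrases covering up to 128 bars
LATENT_DIM = 64     # phrase latent size stated in the report
NOTE_FEATURES = 16  # hypothetical per-note feature size

def encode_phrase(notes: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Toy stand-in for a PhraseVAE encoder: mean-pool the notes of a
    phrase, then linearly project to the 64-d latent space."""
    return notes.mean(axis=0) @ W

# Encode a song: variable-length phrases -> fixed-size latent sequence.
W = rng.normal(size=(NOTE_FEATURES, LATENT_DIM)) / np.sqrt(NOTE_FEATURES)
phrases = [rng.normal(size=(int(rng.integers(4, 40)), NOTE_FEATURES))
           for _ in range(NUM_PHRASES)]
z0 = np.stack([encode_phrase(p, W) for p in phrases])  # shape (32, 64)

# DDPM-style forward noising applied to the entire latent sequence at
# once; a trained denoiser would invert this over the full sequence in
# one sampling run, with no bar-wise autoregression.
alpha_bar = 0.5
eps = rng.normal(size=z0.shape)
z_t = np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps

print(z0.shape, z_t.shape)  # (32, 64) (32, 64)
```

Because the diffusion model sees a sequence of only 32 phrase latents instead of tens of thousands of note tokens, the full 128-bar context fits comfortably in one pass, which is what enables the second-scale full-song generation claimed above.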