🤖 AI Summary
Existing music generation models struggle to model and edit multiple instrumental parts—such as bass, drums, and others—separately, limiting compositional flexibility. To address this, we introduce the first open-source, high-fidelity autoregressive model for multi-track music generation. Our approach employs track-specific compression encoders to tokenize each stem into a discrete token stream, constructs a multi-stream Transformer architecture, and incorporates track-wise masked conditioning and source-separation-augmented training. The model enables parallel generation of arbitrary tracks, localized track replacement, and iterative, stepwise composition—all while preserving audio fidelity and inter-track musical coherence. It thus achieves fine-grained, editable multi-track music synthesis. All code, pre-trained weights, and audio samples are publicly released.
📝 Abstract
While most music generation models generate a mixture of stems (in mono or stereo), we propose to train a multi-stem generative model with 3 stems (bass, drums and other) that learns the musical dependencies between them. To do so, we train one specialized compression algorithm per stem to tokenize the music into parallel streams of tokens. Then, we leverage recent improvements in the task of music source separation to train a multi-stream text-to-music language model on a large dataset. Finally, thanks to a particular conditioning method, our model is able to edit bass, drums or other stems of existing or generated songs, as well as to perform iterative composition (e.g. generating bass on top of existing drums). This gives more flexibility to music generation algorithms, and it is, to the best of our knowledge, the first open-source multi-stem autoregressive music generation model that achieves both good-quality generation and coherent source editing. Code and model weights will be released, and samples are available at https://simonrouard.github.io/musicgenstem/.
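To make the multi-stream setup concrete, here is a minimal sketch of the data layout the abstract describes: each stem is tokenized into its own stream, the streams are aligned into per-timestep frames for a multi-stream language model, and track-wise masked conditioning fixes some stems while others are (re)generated. This is illustrative only; the names (`STEMS`, `MASK_ID`) and structure are assumptions for exposition, not the released code.

```python
# Illustrative sketch of multi-stream token frames and track-wise masking.
# Not the paper's implementation; names and IDs are assumed for clarity.

STEMS = ("bass", "drums", "other")
MASK_ID = -1  # placeholder for positions the model must generate

def frames_from_streams(streams):
    """Zip per-stem token streams into per-timestep frames.

    streams: dict mapping stem name -> list of token ids, all equal length.
    Returns a list of frames, each a tuple with one token per stem.
    """
    length = len(next(iter(streams.values())))
    assert all(len(s) == length for s in streams.values())
    return [tuple(streams[stem][t] for stem in STEMS) for t in range(length)]

def mask_for_editing(frames, stems_to_generate):
    """Track-wise masked conditioning: keep the conditioning stems' tokens
    and replace the stems to (re)generate with MASK_ID."""
    gen = {STEMS.index(s) for s in stems_to_generate}
    return [tuple(MASK_ID if i in gen else tok for i, tok in enumerate(frame))
            for frame in frames]

streams = {"bass": [11, 12, 13], "drums": [21, 22, 23], "other": [31, 32, 33]}
frames = frames_from_streams(streams)
# Regenerate only the bass on top of existing drums and other:
masked = mask_for_editing(frames, {"bass"})
```

In this toy layout, editing one stem of an existing song amounts to masking only that stem's positions, so the model conditions on the untouched streams while sampling the masked one.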