🤖 AI Summary
This work addresses the high computational cost and reliance on proprietary data in existing text-to-music generation methods by introducing the first application of state-space models (SSMs) to this task. Leveraging a diffusion-based framework trained entirely from scratch on 457 hours of publicly available, Creative Commons-licensed music, the authors propose both a single-stage and a two-stage hybrid architecture with only 300M parameters. The resulting model matches MusicGen-small in both objective metrics and subjective audio quality while requiring merely 9% of the FLOPs and 2% of the training data. Notably, even when scaled down to one-quarter of its original size, the model remains competitive. The study significantly enhances reproducibility and accessibility in the field by fully open-sourcing the code, data, and trained models.
📝 Abstract
Recent advances in text-to-music generation (TTM) have yielded high-quality results, but often at the cost of extensive compute and the use of large proprietary internal datasets. To improve the affordability and openness of TTM training, an open-source generative model backbone that is more training- and data-efficient is needed. In this paper, we constrain the number of trainable parameters in the generative model to match that of the MusicGen-small benchmark (about 300M parameters), and replace its Transformer backbone with the emerging class of state-space models (SSMs). Specifically, we explore different SSM variants for sequence modeling, and compare a single-stage SSM-based design with a decomposable two-stage SSM/diffusion hybrid design. All proposed models are trained from scratch on a purely public dataset comprising 457 hours of CC-licensed music, ensuring full openness. Our experimental findings are three-fold. First, we show that SSMs exhibit superior training efficiency compared to the Transformer counterpart. Second, despite using only 9% of the FLOPs and 2% of the training data relative to the MusicGen-small benchmark, our model achieves competitive performance in both objective metrics and subjective listening tests based on MusicCaps captions. Finally, our scaling-down experiment demonstrates that SSMs can maintain competitive performance relative to the Transformer baseline under the same training budget (measured in iterations), even when the model size is reduced to one quarter. To facilitate the democratization of TTM research, the processed captions, model checkpoints, and source code are available on GitHub via the project page: https://lonian6.github.io/ssmttm/.
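For readers unfamiliar with the backbone class the paper adopts, the core building block of an SSM layer is a discretized linear state-space recurrence: a hidden state is updated by matrices A and B at each step and read out by C. The sketch below is a minimal illustrative implementation of that recurrence in NumPy; it is not the paper's architecture (the paper's SSM variants add learned, input-dependent parameterizations and efficient scan algorithms on top of this basic form), and the function name `ssm_scan` is our own.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discretized linear state-space model over an input sequence.

    Per-step recurrence (the basic form underlying SSM layers):
        x_k = A @ x_{k-1} + B @ u_k   # state update
        y_k = C @ x_k                 # readout
    A: (d, d), B: (d, m), C: (p, d), u: sequence of (m,) inputs.
    Returns the stacked outputs y of shape (len(u), p).
    """
    state_dim = A.shape[0]
    x = np.zeros(state_dim)
    outputs = []
    for u_k in u:
        x = A @ x + B @ u_k   # fold the new input into the state
        outputs.append(C @ x)
    return np.array(outputs)

# Toy usage: a scalar SSM that exponentially accumulates its input.
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
u = np.ones((3, 1))
y = ssm_scan(A, B, C, u)  # states: 1.0, 1.5, 1.75
```

Because the recurrence is linear, it can be unrolled as a convolution at training time and run as a recurrence at inference time, which is the source of the training efficiency the paper contrasts against the quadratic cost of Transformer attention.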