Muse: Towards Reproducible Long-Form Song Generation with Fine-Grained Style Control

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the reproducibility challenges in long-form song generation research, which has been hindered by the absence of publicly available datasets, impeding fair comparison and technical progress. We propose an open-source, end-to-end controllable system for long-form song synthesis that leverages the Qwen language model integrated with MuCodec discrete audio tokens, enabling fine-grained style control through single-stage supervised fine-tuning. To support this effort, we release the first open dataset comprising 116,000 licensed synthetic songs, accompanied by a complete pipeline for training, evaluation, and deployment. Experimental results demonstrate superior performance in phoneme error rate, text-to-music style consistency, and audio aesthetic quality, while enabling controllable generation at the segment level of musical structure, thereby significantly enhancing reproducibility in this research domain.

📝 Abstract
Recent commercial systems such as Suno demonstrate strong capabilities in long-form song generation, while academic research remains largely non-reproducible due to the lack of publicly available training data, hindering fair comparison and progress. To this end, we release a fully open-source system for long-form song generation with fine-grained style conditioning, including a licensed synthetic dataset, training and evaluation pipelines, and Muse, an easy-to-deploy song generation model. The dataset consists of 116k fully licensed synthetic songs with automatically generated lyrics and style descriptions paired with audio synthesized by SunoV5. We train Muse via single-stage supervised fine-tuning of a Qwen-based language model extended with discrete audio tokens from MuCodec, without task-specific losses, auxiliary objectives, or additional architectural components. Our evaluations find that although Muse is trained with a modest data scale and model size, it achieves competitive performance on phoneme error rate, text-music style similarity, and audio aesthetic quality, while enabling controllable segment-level generation across different musical structures. All data, model weights, and training and evaluation pipelines will be publicly released, paving the way for continued progress in controllable long-form song generation research. The project repository is available at https://github.com/yuhui1038/Muse.
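The abstract describes extending a text language model's vocabulary with discrete audio tokens so that song generation reduces to ordinary next-token prediction. The following is a minimal sketch of that idea only; the vocabulary size, codebook size, and function names are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: extend a text LM vocabulary with discrete audio
# codes so lyrics/style text and audio share one token stream for
# single-stage supervised fine-tuning (sizes below are assumptions).

TEXT_VOCAB_SIZE = 151_936   # assumed Qwen-style text vocabulary size
NUM_AUDIO_CODES = 16_384    # assumed MuCodec-style codebook size

def audio_token_id(code: int) -> int:
    """Map a discrete audio code to an ID in the extended vocabulary,
    placed after all text-token IDs."""
    if not 0 <= code < NUM_AUDIO_CODES:
        raise ValueError(f"audio code {code} outside codebook range")
    return TEXT_VOCAB_SIZE + code

def build_sft_sequence(prompt_ids: list[int], audio_codes: list[int]) -> list[int]:
    """Concatenate the text prompt (lyrics + style description) with the
    song's audio tokens into one sequence; training then uses only the
    standard language-modeling loss, with no auxiliary objectives."""
    return list(prompt_ids) + [audio_token_id(c) for c in audio_codes]

# Example: three text tokens followed by three audio codes.
seq = build_sft_sequence([101, 102, 103], [0, 7, 16_383])
```

Under this framing, segment-level control amounts to interleaving structure markers (e.g. verse or chorus labels) into the text portion of the sequence, so the same next-token objective covers both conditioning and audio.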
Problem

Research questions and friction points this paper is trying to address.

reproducibility
long-form song generation
fine-grained style control
training data availability
controllable music generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-grained style control
reproducible song generation
synthetic music dataset
single-stage finetuning
discrete audio tokens
Authors

Changhao Jiang (Fudan NLP Group)
Jiahao Chen (Fudan NLP Group)
Zhenghao Xiang (Fudan NLP Group)
Zhixiong Yang (Fudan NLP Group)
Hanchen Wang (Fudan NLP Group)
Jiabao Zhuang (Fudan NLP Group)
Xinmeng Che (Fudan NLP Group)
Jiajun Sun (Fudan NLP Group)
Hui Li (Fudan NLP Group)
Yifei Cao (Fudan NLP Group)
Shihan Dou (Fudan University)
Ming Zhang (School of Computer Science and Technology, Fudan University)
Junjie Ye (Fudan University)
Tao Ji (Renmin University of China)
Tao Gui (Fudan NLP Group)
Qi Zhang (Fudan University)
Xuanjing Huang (Fudan NLP Group)