DiffRhythm+: Controllable and Flexible Full-Length Song Generation with Preference Optimization

📅 2025-07-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Full-length song generation faces three key challenges: data imbalance, limited stylistic controllability, and inconsistent output quality. To address these, we propose DiffRhythm+, a diffusion-based framework for controllable full-song generation. Our approach constructs a balanced, augmented dataset; introduces a multimodal style conditioning mechanism that enables joint control via text descriptions and reference audio; and incorporates a direct preference optimization module aligned with user preferences. The core innovation is the integrated combination of multimodal style control and preference-driven optimization, enabling fine-grained stylistic specification and consistent generation quality. Experiments demonstrate that DiffRhythm+ significantly outperforms state-of-the-art methods in naturalness, arrangement complexity, and listener satisfaction. It effectively mitigates lyric repetition and omission while enhancing expressiveness and musical diversity.
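
The multimodal style conditioning described above can be pictured as two encoder branches fused into a single conditioning vector for the diffusion backbone. The following is a minimal PyTorch sketch assuming CLAP-style pretrained text and audio encoders upstream; the module names, dimensions, and average-fusion rule are illustrative assumptions, not the paper's verified architecture.

```python
# Hypothetical sketch of multimodal style conditioning: a text description
# and/or a reference audio clip are mapped into one style embedding that
# conditions the diffusion backbone. Encoder choices and fusion-by-averaging
# are illustrative assumptions, not the paper's actual design.
import torch
import torch.nn as nn


class StyleConditioner(nn.Module):
    def __init__(self, text_dim=512, audio_dim=512, style_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, style_dim)    # project text-encoder output
        self.audio_proj = nn.Linear(audio_dim, style_dim)  # project audio-encoder output
        # learned fallback used when neither modality is given at inference time
        self.null_style = nn.Parameter(torch.zeros(style_dim))

    def forward(self, text_emb=None, audio_emb=None):
        styles = []
        if text_emb is not None:
            styles.append(self.text_proj(text_emb))
        if audio_emb is not None:
            styles.append(self.audio_proj(audio_emb))
        if not styles:  # unconditional branch
            return self.null_style.unsqueeze(0)
        # average the available modalities into a single conditioning vector
        return torch.stack(styles, dim=0).mean(dim=0)


# Usage: embeddings would come from pretrained encoders (e.g., a CLAP-style model).
cond = StyleConditioner()
text_emb = torch.randn(1, 512)   # stand-in for an encoded style description
audio_emb = torch.randn(1, 512)  # stand-in for an encoded reference clip
style = cond(text_emb=text_emb, audio_emb=audio_emb)  # -> (1, 256), fed to the diffusion model
```

The learned null embedding stands in when neither modality is provided, which also makes classifier-free guidance straightforward to train.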

📝 Abstract
Songs, as a central form of musical art, exemplify the richness of human intelligence and creativity. While recent advances in generative modeling have enabled notable progress in long-form song generation, current systems for full-length song synthesis still face major challenges, including data imbalance, insufficient controllability, and inconsistent musical quality. DiffRhythm, a pioneering diffusion-based model, advanced the field by generating full-length songs with expressive vocals and accompaniment. However, its performance was constrained by an unbalanced model training dataset and limited controllability over musical style, resulting in noticeable quality disparities and restricted creative flexibility. To address these limitations, we propose DiffRhythm+, an enhanced diffusion-based framework for controllable and flexible full-length song generation. DiffRhythm+ leverages a substantially expanded and balanced training dataset to mitigate issues such as repetition and omission of lyrics, while also fostering the emergence of richer musical skills and expressiveness. The framework introduces a multi-modal style conditioning strategy, enabling users to precisely specify musical styles through both descriptive text and reference audio, thereby significantly enhancing creative control and diversity. We further introduce direct preference optimization aligned with user preferences, guiding the model toward consistently preferred outputs across evaluation metrics. Extensive experiments demonstrate that DiffRhythm+ achieves significant improvements in naturalness, arrangement complexity, and listener satisfaction over previous systems.
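
The abstract names preference optimization as the alignment mechanism but gives no formula. For orientation, the sketch below shows a Diffusion-DPO-style objective (Wallace et al., 2023), a standard way to adapt direct preference optimization to diffusion models; the paper's exact loss is not reproduced here, and the value of beta and the denoising-error parameterization are assumptions for illustration.

```python
# Hedged sketch of a Diffusion-DPO-style preference loss, one common way to
# apply direct preference optimization to diffusion models. Beta and the
# denoising-error inputs are illustrative assumptions, not the paper's values.
import torch
import torch.nn.functional as F


def diffusion_dpo_loss(err_w, err_w_ref, err_l, err_l_ref, beta=2000.0):
    """err_* are per-sample denoising errors ||eps - eps_theta(x_t)||^2 for the
    preferred (w) and dispreferred (l) songs, under the trained model and a
    frozen reference model. Lower error on the preferred sample is rewarded."""
    # improvement of the trained model over the reference on each sample
    delta_w = err_w - err_w_ref   # negative if the model fits the winner better
    delta_l = err_l - err_l_ref
    # push delta_w down relative to delta_l
    return -F.logsigmoid(-beta * (delta_w - delta_l)).mean()


# Toy usage with made-up error values.
err_w, err_w_ref = torch.tensor([0.10]), torch.tensor([0.12])
err_l, err_l_ref = torch.tensor([0.11]), torch.tensor([0.10])
loss = diffusion_dpo_loss(err_w, err_w_ref, err_l, err_l_ref)
```

Intuitively, the loss rewards the trained model for reducing denoising error on listener-preferred songs more than on dispreferred ones, relative to a frozen reference model.
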
Problem

Research questions and friction points this paper is trying to address.

Address data imbalance in full-length song generation (a resampling sketch follows this list)
Enhance controllability over musical style and quality
Optimize user preference alignment in song outputs
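
As flagged in the first item above, one standard remedy for data imbalance is inverse-frequency resampling over style or genre labels. The sketch below illustrates the general technique with PyTorch's WeightedRandomSampler; it is not a description of the authors' actual pipeline, which the abstract only characterizes as balanced and augmented.

```python
# Illustrative sketch of inverse-frequency sampling over genre labels.
# The specific labels and weighting scheme are assumptions for illustration.
from collections import Counter

import torch
from torch.utils.data import WeightedRandomSampler

genres = ["pop", "pop", "pop", "rock", "jazz", "pop", "rock", "folk"]
counts = Counter(genres)
# Each sample is weighted by 1 / frequency of its genre, so rare genres
# (jazz, folk) are drawn as often as the dominant one (pop) in expectation.
weights = torch.tensor([1.0 / counts[g] for g in genres], dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(genres), replacement=True)
balanced_indices = list(sampler)  # pass `sampler=` to a DataLoader in practice
```
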
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced diffusion-based framework for full-length songs
Multi-modal style conditioning with text and audio
Direct preference optimization aligned with user preferences
🔎 Similar Papers
No similar papers found.
Huakang Chen
Audio, Speech and Language Processing Lab (ASLP@NPU)
Yuepeng Jiang
Northwestern Polytechnical University
Speech Processing · Speech Synthesis · Voice Conversion
Guobin Ma
Northwestern Polytechnical University
Chunbo Hao
Audio, Speech and Language Processing Lab (ASLP@NPU)
Shuai Wang
School of Intelligence Science and Technology, Nanjing University, Suzhou, China
Jixun Yao
Audio, Speech and Language Processing Lab (ASLP@NPU)
Ziqian Ning
Audio, Speech and Language Processing Lab (ASLP@NPU)
Meng Meng
Jian Luan
Toshiba, Microsoft, Xiaomi
LLM · VLM · TTS · Singing Synthesis
Lei Xie
Audio, Speech and Language Processing Lab (ASLP@NPU)