🤖 AI Summary
Full-length song generation faces three key challenges: data imbalance, limited stylistic controllability, and inconsistent output quality. To address these, we propose DiffRhythm+, an enhanced diffusion-based framework for controllable full-song generation. Our approach constructs an expanded and balanced training dataset; introduces a multimodal style conditioning mechanism enabling joint control via text descriptions and reference audio; and incorporates a direct preference optimization module that aligns outputs with user preferences. The central contribution is the combination of multimodal style control with preference-aligned training, enabling fine-grained stylistic specification and more consistent generation quality. Experiments demonstrate that DiffRhythm+ significantly outperforms state-of-the-art methods in naturalness, arrangement complexity, and listener satisfaction, while effectively mitigating lyric repetition and omission and enhancing expressiveness and musical diversity.
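The paper does not spell out the exact form of the preference-optimization objective, but a minimal sketch, assuming it follows the standard DPO formulation over pairs of preferred and rejected generations, might look like the following (function name, `beta` value, and the use of per-sample log-likelihoods are illustrative assumptions, not taken from the paper):

```python
import torch.nn.functional as F

def dpo_style_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Logistic preference loss over per-sample scores (hypothetical sketch).

    policy_* / ref_*: log-likelihoods (or negative denoising losses, for a
    diffusion model) of the preferred and rejected songs under the current
    model and a frozen reference model; all tensors have shape (batch,).
    """
    # Implicit reward margin: how much more the policy favors the preferred
    # sample over the rejected one, relative to the reference model.
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # Maximizing this margin with a logistic loss nudges the model toward
    # the generations listeners preferred.
    return -F.logsigmoid(beta * margin).mean()
```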
📝 Abstract
Songs, as a central form of musical art, exemplify the richness of human intelligence and creativity. While recent advances in generative modeling have enabled notable progress in long-form song generation, current systems for full-length song synthesis still face major challenges, including data imbalance, insufficient controllability, and inconsistent musical quality. DiffRhythm, a pioneering diffusion-based model, advanced the field by generating full-length songs with expressive vocals and accompaniment. However, its performance was constrained by an imbalanced training dataset and limited control over musical style, resulting in noticeable quality disparities and restricted creative flexibility. To address these limitations, we propose DiffRhythm+, an enhanced diffusion-based framework for controllable and flexible full-length song generation. DiffRhythm+ leverages a substantially expanded and balanced training dataset to mitigate issues such as lyric repetition and omission, while also fostering richer musical capability and expressiveness. The framework introduces a multimodal style conditioning strategy, enabling users to specify musical styles precisely through both descriptive text and reference audio, thereby significantly enhancing creative control and diversity. We further introduce direct preference optimization, guiding the model toward outputs that are consistently preferred across evaluation metrics. Extensive experiments demonstrate that DiffRhythm+ achieves significant improvements in naturalness, arrangement complexity, and listener satisfaction over previous systems.
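To make the multimodal style conditioning concrete, below is a minimal sketch of how a text-prompt embedding and a reference-audio embedding could be fused into a single style vector for the diffusion backbone. The class name, encoder dimensions, and fusion scheme are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiModalStyleConditioner(nn.Module):
    """Hypothetical fusion of text and reference-audio style embeddings.

    Dimensions and the fusion scheme are illustrative; the upstream text
    encoder and audio style encoder are assumed to be provided elsewhere.
    """

    def __init__(self, text_dim=768, audio_dim=512, style_dim=512):
        super().__init__()
        self.style_dim = style_dim
        self.text_proj = nn.Linear(text_dim, style_dim)
        self.audio_proj = nn.Linear(audio_dim, style_dim)
        self.fuse = nn.Sequential(nn.Linear(2 * style_dim, style_dim), nn.SiLU())

    def forward(self, text_emb=None, audio_emb=None):
        # Either modality may be omitted: missing inputs fall back to zeros,
        # so a song can be conditioned on text, reference audio, or both.
        ref = text_emb if text_emb is not None else audio_emb
        zeros = torch.zeros(ref.shape[0], self.style_dim, device=ref.device)
        t = self.text_proj(text_emb) if text_emb is not None else zeros
        a = self.audio_proj(audio_emb) if audio_emb is not None else zeros
        return self.fuse(torch.cat([t, a], dim=-1))  # (batch, style_dim)
```

In a setup like this, the resulting style vector would then be injected into the diffusion model at each denoising step, for example via cross-attention or adaptive normalization; the specific injection mechanism used by DiffRhythm+ is not stated in the abstract.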