🤖 AI Summary
Data scarcity and limited model scalability hinder the development of diffusion-based singing voice synthesis. To address these challenges, this work proposes a two-stage solution. First, a compact seed set of human-sung recordings is built by pairing fixed melodies with diverse lyrics generated by large language models (LLMs); melody-specific models trained on this seed set then synthesize a high-quality Chinese singing corpus exceeding 500 hours. Second, DiTSinger, a novel diffusion-based singing synthesizer, is introduced. It couples a Diffusion Transformer architecture with an implicit alignment mechanism that eliminates reliance on phoneme-level duration annotations: phoneme-to-acoustic attention is instead constrained to character-level spans, improving robustness under noisy or uncertain alignments. Furthermore, the model's depth, width, and feature resolution are systematically scaled to increase representational capacity. Experiments demonstrate stable training and high-fidelity synthesis even without precise alignment labels, with clear gains over existing diffusion-based methods in scalability, robustness, and audio quality.
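To make the implicit alignment idea concrete, here is a minimal PyTorch sketch of a character-span attention mask. This is our illustration, not the paper's released code: the function name `char_span_attention_mask`, the `[start, end)` frame-span format, and the assumption that coarse character-level timing is available are all hypothetical.

```python
import torch

def char_span_attention_mask(
    phone_to_char: torch.LongTensor,     # (P,) parent character index of each phoneme
    char_frame_spans: torch.LongTensor,  # (C, 2) [start, end) frame span per character
    num_frames: int,
) -> torch.BoolTensor:
    """Boolean mask of shape (num_frames, P): frame t may attend to phoneme p
    only if t falls inside the frame span of p's parent character."""
    starts = char_frame_spans[phone_to_char, 0]   # (P,) span start per phoneme
    ends = char_frame_spans[phone_to_char, 1]     # (P,) span end per phoneme
    t = torch.arange(num_frames).unsqueeze(1)     # (num_frames, 1) frame indices
    return (t >= starts.unsqueeze(0)) & (t < ends.unsqueeze(0))

# Usage: constrain frame-to-phoneme cross-attention logits with the mask.
mask = char_span_attention_mask(
    phone_to_char=torch.tensor([0, 0, 1, 1, 1]),          # 5 phonemes over 2 characters
    char_frame_spans=torch.tensor([[0, 40], [40, 100]]),  # coarse character timing
    num_frames=100,
)
scores = torch.randn(100, 5)                        # frame-to-phoneme attention logits
scores = scores.masked_fill(~mask, float("-inf"))   # forbid out-of-span attention
weights = scores.softmax(dim=-1)                    # valid distribution per frame
```

Because the constraint operates only at the character level, no phoneme-level durations are required: attention is free to distribute phoneme timing within each character span.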
📝 Abstract
Recent progress in diffusion-based Singing Voice Synthesis (SVS) demonstrates strong expressiveness but remains limited by data scarcity and model scalability. We introduce a two-stage pipeline: a compact seed set of human-sung recordings is constructed by pairing fixed melodies with diverse LLM-generated lyrics, and melody-specific models are trained to synthesize over 500 hours of high-quality Chinese singing data. Building on this corpus, we propose DiTSinger, a Diffusion Transformer with rotary position embeddings (RoPE) and query-key normalization (qk-norm), systematically scaled in depth, width, and resolution for enhanced fidelity. Furthermore, we design an implicit alignment mechanism that obviates the need for phoneme-level duration labels by constraining phoneme-to-acoustic attention within character-level spans, thereby improving robustness under noisy or uncertain alignments. Extensive experiments validate that our approach enables scalable, alignment-free, and high-fidelity SVS.
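For readers unfamiliar with the two Transformer modifications named in the abstract, the sketch below shows one common way RoPE and qk-norm are applied inside an attention block. The paper's exact formulation is not given here, so the L2-normalization variant of qk-norm (the original formulation also adds a learned scale, omitted for brevity) and the half-split RoPE layout are assumptions.

```python
import torch
import torch.nn.functional as F

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings to (B, H, T, D) with even head dim D."""
    _, _, T, D = x.shape
    half = D // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype) / half)  # (half,)
    angles = torch.arange(T, dtype=x.dtype)[:, None] * freqs     # (T, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def qknorm_rope_attention(q, k, v):
    """Attention with qk-norm (L2-normalized queries/keys) and RoPE.
    Shapes: (B, H, T, D)."""
    q = rope(F.normalize(q, dim=-1))  # bounded logits stabilize training at scale
    k = rope(F.normalize(k, dim=-1))
    return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 128, 64)  # batch, heads, frames, head_dim
out = qknorm_rope_attention(q, k, v)    # (1, 8, 128, 64)
```

qk-norm is commonly used to keep attention logits bounded as depth and width grow, which aligns with the scaling described in the abstract; RoPE injects relative position information directly into the query-key product.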