🤖 AI Summary
Current lyric-to-song generation models (e.g., DiffRhythm, ACE-Step, LeVo) lack fine-grained, word-level temporal alignment and duration control, limiting their applicability in professional music production. To address this, we propose the first lightweight flow-matching model enabling end-to-end word-level timing alignment and controllable phoneme duration. We further introduce a synthetic-data-driven aesthetic alignment mechanism—requiring no manual annotation—and integrate Direct Preference Optimization (DPO) to enhance auditory naturalness and alignment with human preferences. We construct JAME, a dedicated evaluation dataset, to rigorously assess model performance. Experiments demonstrate that our method significantly outperforms existing approaches across key musical attributes—including audio fidelity, rhythmic accuracy, and vocal naturalness—while achieving substantial improvements in perceptual quality as validated by human listeners.
📝 Abstract
Diffusion and flow-matching models have revolutionized automatic text-to-audio generation in recent years. These models are increasingly capable of generating high-quality audio outputs that faithfully capture speech and acoustic events. However, there is still much room for improvement in creative audio generation, which primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, ACE-Step, and LeVo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack the fine-grained word-level controllability often desired by musicians in their workflows. To the best of our knowledge, our flow-matching-based JAM is the first effort toward endowing song generation with word-level timing and duration control, allowing fine-grained vocal control. To better align generated songs with human preferences, we implement aesthetic alignment through Direct Preference Optimization, which iteratively refines the model using a synthetic dataset, eliminating the need for manual data annotation. Furthermore, we aim to standardize the evaluation of lyrics-to-song models through our public evaluation dataset JAME. We show that JAM outperforms existing models on music-specific attributes.
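The abstract's aesthetic-alignment step uses Direct Preference Optimization over synthetic preference pairs. The abstract does not spell out the objective, but the standard pairwise DPO loss can be sketched as follows; the function name, the `beta` value, and the example log-likelihoods are illustrative assumptions, not taken from the paper:

```python
import math

def dpo_loss(logp_win, logp_lose, ref_logp_win, ref_logp_lose, beta=0.1):
    """Standard pairwise DPO loss for one (preferred, rejected) sample pair.

    logp_*     -- log-likelihoods under the policy being trained
    ref_logp_* -- log-likelihoods under the frozen reference model
    beta       -- temperature controlling deviation from the reference
    (Illustrative sketch; not the paper's exact formulation.)
    """
    # Implicit reward margin: how much more the policy prefers the winner
    # relative to the reference model's preference.
    margin = (logp_win - ref_logp_win) - (logp_lose - ref_logp_lose)
    # -log(sigmoid(beta * margin)); shrinks as the policy favors the winner.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A positive margin (policy prefers the winner) gives a smaller loss
# than a negative margin (policy prefers the rejected sample).
print(round(dpo_loss(-10.0, -12.0, -11.0, -11.0), 4))  # margin = +2
print(round(dpo_loss(-12.0, -10.0, -11.0, -11.0), 4))  # margin = -2
```

In this setup the synthetic dataset supplies the (preferred, rejected) song pairs, so minimizing this loss nudges the generator toward outputs humans would rate higher without any manual annotation.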