Generative Multi-modal Feedback for Singing Voice Synthesis Evaluation

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current singing voice synthesis (SVS) evaluation relies on scalar metrics, failing to capture multidimensional perceptual attributes, such as expressiveness, while suffering from high annotation costs and poor interpretability. To address this, we propose the first generative multi-modal feedback framework that jointly processes audio and text inputs to model linguistic and acoustic representations, enabling automatic generation of fine-grained critiques across melody accuracy, lyric fidelity, and timbral expressiveness. Our approach incorporates human music-informed feedback to guide large language models in synthesizing musically grounded commentary and employs a hybrid-data fine-tuning strategy to enhance musical semantic precision. Experiments demonstrate that our generated feedback achieves superior musical plausibility and more actionable guidance than baselines, with strong correlation to expert judgments. The framework effectively supports iterative SVS model refinement, offering a scalable, interpretable, and musically informed alternative to conventional evaluation paradigms.

📝 Abstract
Singing voice synthesis (SVS) has advanced significantly, enabling models to generate vocals with accurate pitch and consistent style. As these capabilities improve, the need for reliable evaluation and optimization becomes increasingly critical. However, current methods such as reward systems often rely on single numerical scores, struggle to capture dimensions such as phrasing or expressiveness, and require costly annotations, limiting interpretability and generalization. To address these issues, we propose a generative feedback (i.e., reward model) framework that provides multi-dimensional language and audio feedback for SVS assessment. Our approach leverages an audio-language model to generate text and audio critiques, covering aspects such as melody, content, and auditory quality. The model is fine-tuned on a hybrid dataset combining human music reactions and synthetic critiques from multimodal LLMs (MLLMs), enhancing diversity and linguistic richness. Quantitative experiments validate the effectiveness of the proposed dataset and training strategy, demonstrating that the framework produces musically accurate and interpretable evaluations suitable for guiding generative model improvement. The code is available at [https://github.com/opendilab/VocalCritic](https://github.com/opendilab/VocalCritic)
Problem

Research questions and friction points this paper is trying to address.

Develops a multi-dimensional feedback system for singing synthesis evaluation
Addresses limitations of single-score methods lacking interpretability and detail
Generates both text and audio critiques to guide model improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative multi-modal feedback for singing voice synthesis evaluation
Fine-tuned audio-language model for text and audio critiques
Hybrid dataset combining human reactions and synthetic critiques
Xueyan Li
Shanghai Artificial Intelligence Laboratory
Yuxin Wang
University of Science and Technology of China; Shanghai Artificial Intelligence Laboratory
Mengjie Jiang
Columbia University
Qingzi Zhu
Shanghai Artificial Intelligence Laboratory
Jiang Zhang
Meta AI
Zoey Kim
Independent Researcher
Yazhe Niu
Shanghai Artificial Intelligence Laboratory; The Chinese University of Hong Kong