🤖 AI Summary
Current synthetic speech still falls significantly short of human speech in naturalness, and the field lacks an interpretable, broadly generalizable evaluation framework. To address this, this work proposes the Generative Speech Reward Model (GSRM), which is the first to introduce generative reward modeling and chain-of-thought reasoning into speech naturalness assessment. By leveraging interpretable acoustic feature extraction and feature-grounded reasoning chains, GSRM enables fine-grained, explainable quality judgments. The model achieves human-level consistency in naturalness rating prediction, with a model-human correlation significantly higher than that of existing approaches. Furthermore, GSRM effectively supports online reinforcement learning from human feedback (RLHF) for large speech generative models, enhancing the naturalness of synthesized speech and generalizing well to cross-domain spoken interaction scenarios.
📝 Abstract
Recent advances in speech language models, such as GPT-4o Voice Mode and Gemini Live, have demonstrated promising speech generation capabilities. Nevertheless, the aesthetic naturalness of the synthesized audio still lags behind that of human speech. Enhancing generation quality requires a reliable evaluator of speech naturalness. However, existing naturalness evaluators typically regress raw audio to scalar scores, offering limited interpretability and failing to generalize to speech across different taxonomies. Inspired by recent advances in generative reward modeling, we propose the Generative Speech Reward Model (GSRM), a reasoning-centric reward model tailored for speech. GSRM is trained to decompose speech naturalness evaluation into an interpretable acoustic feature extraction stage followed by feature-grounded chain-of-thought reasoning, enabling explainable judgments. To support this, we curated a large-scale human feedback dataset comprising 31k expert ratings, along with an out-of-domain benchmark of real-world user-assistant speech interactions. Experiments show that GSRM substantially outperforms existing speech naturalness predictors, achieving a model-human correlation in naturalness score prediction that approaches human inter-rater consistency. We further show that GSRM improves the naturalness of speech LLM generations by serving as an effective verifier for online RLHF.
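The two-stage evaluation the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (`pitch_variation`, `pause_ratio`, `speaking_rate`), thresholds, and penalty weights are all illustrative assumptions, and the actual GSRM performs the reasoning stage with a trained generative model rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class AcousticFeatures:
    """Stage 1 output: interpretable features extracted from the audio.
    Feature names and units here are hypothetical examples."""
    pitch_variation: float  # e.g. std-dev of F0 in semitones
    pause_ratio: float      # fraction of the clip that is silence
    speaking_rate: float    # syllables per second

def reason_and_score(f: AcousticFeatures) -> tuple[str, float]:
    """Stage 2: a feature-grounded reasoning chain. Each step cites a
    concrete feature value, and the steps aggregate into a 1-5
    naturalness score, so the final judgment is explainable."""
    steps, score = [], 5.0
    if f.pause_ratio > 0.3:
        steps.append(f"pause_ratio={f.pause_ratio:.2f} is high -> choppy delivery (-1.0)")
        score -= 1.0
    if f.pitch_variation < 1.0:
        steps.append(f"pitch_variation={f.pitch_variation:.2f} is low -> monotone prosody (-1.5)")
        score -= 1.5
    if not 3.0 <= f.speaking_rate <= 6.5:
        steps.append(f"speaking_rate={f.speaking_rate:.1f} syl/s is outside a typical range (-0.5)")
        score -= 0.5
    explanation = "; ".join(steps) or "all features within natural ranges"
    return explanation, max(1.0, score)

# The scalar returned here is the kind of signal an online RLHF loop
# would consume as a verifier/reward, while the explanation string is
# what makes the judgment interpretable.
explanation, score = reason_and_score(
    AcousticFeatures(pitch_variation=0.5, pause_ratio=0.4, speaking_rate=4.0)
)
print(score)        # 2.5 under these illustrative penalties
print(explanation)
```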