🤖 AI Summary
This work addresses the limitations of existing reinforcement learning approaches that rely on a single semantic reward: they struggle to holistically optimize the multidimensional, multimodal quality of spoken dialogue and are incompatible with incremental full-duplex response mechanisms. To overcome these challenges, we propose the first multi-reward RLAIF framework tailored for spoken dialogue systems, integrating three distinct rewards—semantic accuracy, audio quality, and emotional consistency. Our approach introduces turn-level preference sampling and a chunk-level DPO (Direct Preference Optimization) alignment strategy to jointly optimize multiple quality dimensions. We also construct the first multi-reward DPO dataset designed for incremental full-duplex spoken dialogue. Experimental results demonstrate that joint training with multiple rewards significantly outperforms single-reward methods in both semantic fidelity and audio naturalness, underscoring the critical role of multidimensional alignment in practical spoken dialogue systems.
📝 Abstract
Reinforcement learning from human or AI feedback (RLHF/RLAIF) for speech-in/speech-out dialogue systems (SDS) remains underexplored, with prior work largely limited to single semantic rewards applied at the utterance level. Such setups overlook the multi-dimensional and multi-modal nature of conversational quality, which encompasses semantic coherence, audio naturalness, speaker consistency, emotion alignment, and turn-taking behavior. Moreover, they are fundamentally mismatched with duplex spoken dialogue systems that generate responses incrementally, where agents must make decisions based on partial utterances. We address these limitations with the first multi-reward RLAIF framework for SDS, combining semantic, audio-quality, and emotion-consistency rewards. To align utterance-level preferences with incremental, blockwise decoding in duplex models, we apply turn-level preference sampling and aggregate per-block log-probabilities within a single DPO objective. We present the first systematic study of preference learning for improving SDS quality in both multi-turn Chain-of-Thought and blockwise duplex models, and release a multi-reward DPO dataset to support reproducible research. Experiments show that single-reward RLAIF selectively improves its targeted metric, while joint multi-reward training yields consistent gains across semantic quality and audio naturalness. These results highlight the importance of holistic, multi-reward alignment for practical conversational SDS.
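To make the chunk-level alignment idea concrete: the abstract describes aggregating per-block log-probabilities from incremental decoding into a single DPO objective. The sketch below is an illustrative simplification, not the paper's implementation; the function name, the per-block inputs, and the `beta` default are assumptions for exposition. It sums each response's block-level log-probabilities into one sequence-level log-probability and applies the standard DPO loss, `-log σ(β[(log πθ(y_w) − log π_ref(y_w)) − (log πθ(y_l) − log π_ref(y_l))])`.

```python
import math

def blockwise_dpo_loss(policy_chosen_blocks, policy_rejected_blocks,
                       ref_chosen_blocks, ref_rejected_blocks, beta=0.1):
    """Illustrative chunk-level DPO loss (hypothetical helper, not the paper's code).

    Each argument is a list of per-block log-probabilities for one response,
    as produced by an incremental (blockwise) duplex decoder. Summing the
    blocks recovers a sequence-level log-probability, so utterance-level
    preferences can be optimized with a single DPO objective.
    """
    logp_chosen = sum(policy_chosen_blocks)      # log pi_theta(y_w | x)
    logp_rejected = sum(policy_rejected_blocks)  # log pi_theta(y_l | x)
    ref_chosen = sum(ref_chosen_blocks)          # log pi_ref(y_w | x)
    ref_rejected = sum(ref_rejected_blocks)      # log pi_ref(y_l | x)

    # DPO implicit-reward margin between the preferred and rejected response
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # -log sigmoid(margin): small when the policy prefers y_w over y_l
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy assigns the preferred response a higher implicit reward, the margin is positive and the loss drops below `log 2`; a policy indifferent between the two responses sits exactly at `log 2`.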