🤖 AI Summary
Existing spoken dialogue systems struggle to produce conversational behavior that is both controllable and natural in dynamic contexts. To address this challenge, this work proposes the first open-source, instruction-controllable full-duplex spoken dialogue model. By freezing the audio encoder and fine-tuning only the language model, the approach trains efficiently under resource-constrained conditions. A single-stage training protocol substantially reduces both data and computational requirements, achieving explicit control over speaker voice, conversation topic, dialogue acts, and turn initiation with just 2,000 hours of training data. The code and models will be publicly released to support reproducible research on controllable spoken dialogue systems.
📝 Abstract
Spoken conversational systems require more than accurate speech generation to have human-like conversations: to feel natural and engaging, they must produce conversational behaviour that adapts dynamically to the context. Current spoken conversational systems, however, rarely allow such customization, limiting their naturalness and usability. In this work, we present the first open, instruction-following full-duplex conversational speech model that can be trained efficiently under typical academic resource constraints. By keeping the audio encoder frozen and fine-tuning only the language model, our model requires just 2,000 hours of data, without relying on large-scale pretraining or multi-stage optimization. The model can follow explicit instructions to control speaker voice, conversation topic, conversational behaviour (e.g., backchanneling and interruptions), and dialogue initiation. We propose a single-stage training protocol and systematically analyze design choices. Both the model and training code will be released to enable reproducible research on controllable full-duplex speech systems.
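The freeze-the-encoder, fine-tune-the-LM recipe described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the paper's actual code: the module names (`audio_encoder`, `language_model`) and the tiny stand-in layers are assumptions made for the sketch.

```python
# Hypothetical sketch of encoder-frozen fine-tuning; module names are
# illustrative placeholders, not the released model's API.
import torch
import torch.nn as nn

class DuplexDialogueModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.audio_encoder = nn.Linear(80, 256)    # stand-in for a pretrained speech encoder
        self.language_model = nn.Linear(256, 512)  # stand-in for the LLM backbone

model = DuplexDialogueModel()

# Freeze the audio encoder: its weights receive no gradient updates.
for p in model.audio_encoder.parameters():
    p.requires_grad = False

# Hand the optimizer only the parameters that still require gradients
# (i.e., the language model), so the encoder stays fixed during training.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5)
```

Because the frozen parameters never enter the optimizer, memory and compute per step drop roughly in proportion to the encoder's share of the model, which is one reason this setup fits academic budgets.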