🤖 AI Summary
This work addresses the challenge of aligning text monologues with audio streams that run at different sampling rates. It proposes the “natural monologue” modeling paradigm, abandoning word-level alignment that relies on high-precision token timestamps, thereby eliminating cascaded errors and preprocessing overhead. Methodologically, it introduces a two-stage training strategy that alternates the monologue’s position relative to the audio (leading vs. trailing), enabling end-to-end multi-channel synchronization by directly modeling listen-speak coordination over continuous text sequences in a 7B-parameter spoken dialog model. Key contributions are: (1) replacing forced temporal alignment with natural speech rhythm to emulate human cognitive behavior in dialog; and (2) improving temporal understanding and generation coordination via the dual-stage training. Experiments demonstrate significantly reduced response latency, improved duplex interaction coherence, and a superior subjective user experience, advancing full-duplex dialogue systems toward practical deployment.
📝 Abstract
Full-duplex dialog models are designed to listen and speak simultaneously, responding rapidly to fast-changing user input. Among existing approaches, native full-duplex models merge different channels (e.g., listening and speaking) within a single time step, overcoming the high response latency inherent to time-division multiplexing (TDM) alternatives. Yet a key challenge remains: aligning textual monologues with audio streams that operate at different bitrates. The prevailing solution relies on word-level alignment, but this can degrade the language ability of large pre-trained models. Moreover, it requires highly accurate timestamps for every token, which introduces cascading errors and increases pre-processing costs. In this paper, we propose organizing textual monologues as continuous token sequences, termed "natural" monologues, which mimic human cognitive behavior in dialog. For temporal alignment, we alternate the position of the natural monologue (leading or trailing the audio) across different training stages. This "dual" training paradigm proves highly effective in building FLM-Audio, our 7B spoken dialog model, which demonstrates superior responsiveness, duplexity, and chat experience, as confirmed by experimental results.
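The alternation described above can be illustrated with a toy sketch. This is not the authors' implementation; the token names, chunking, and function are hypothetical, and it only shows the core idea: the full text monologue is placed either before (leading) or after (trailing) the corresponding audio tokens, so no per-word timestamps are needed, and the position is alternated across training stages.

```python
# Toy sketch of "natural" monologue placement (hypothetical, not the paper's code).

def interleave(text_tokens, audio_tokens, mode):
    """Build one training sequence from a text monologue and an audio stream.

    mode="leading":  text chunk precedes the audio tokens.
    mode="trailing": text chunk follows the audio tokens.
    Either way, no word-level timestamps are required.
    """
    if mode == "leading":
        return text_tokens + audio_tokens
    if mode == "trailing":
        return audio_tokens + text_tokens
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical token streams at different rates: few text tokens, many audio tokens.
text = ["<t>", "hello", "there", "</t>"]
audio = ["<a1>", "<a2>", "<a3>", "<a4>", "<a5>"]

# "Dual" training: alternate the monologue position across stages.
for stage, mode in enumerate(["leading", "trailing"], start=1):
    print(f"stage {stage} ({mode}):", interleave(text, audio, mode))
```

In the leading position the model commits to the text before emitting audio (favoring generation); in the trailing position it summarizes audio it has already consumed (favoring understanding), which is one plausible reading of why alternating the two helps.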