🤖 AI Summary
Prior to this work, no end-to-end full-duplex spoken dialogue system existed for Japanese, leaving a gap in modeling natural conversational phenomena such as speech overlap and backchanneling. Method: We propose the first end-to-end full-duplex Japanese spoken dialogue system, built upon the Moshi architecture. Our approach combines two-stage training, pre-training on large-scale Japanese spoken dialogue data followed by fine-tuning on high-quality stereo dialogue data, with augmentation from multi-stream TTS-synthesized dialogue. Contribution/Results: We publicly release the first open-source full-duplex Japanese dialogue model. Experiments demonstrate substantial improvements over existing Japanese baselines in both speech naturalness and semantic coherence.
📝 Abstract
Full-duplex spoken dialogue systems, which can model simultaneous bidirectional features of human conversation such as speech overlaps and backchannels, have recently attracted significant attention. However, research on full-duplex spoken dialogue systems for Japanese remains scarce. In this paper, we present the first publicly available full-duplex spoken dialogue model in Japanese, built upon Moshi, an English full-duplex dialogue model. Our model is trained in two stages: pre-training on large-scale Japanese spoken dialogue data, followed by fine-tuning on high-quality stereo spoken dialogue data. We further enhance the model's performance by incorporating synthetic dialogue data generated by a multi-stream text-to-speech system. Evaluation experiments demonstrate that the trained model outperforms Japanese baseline models in both naturalness and meaningfulness.