CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models

📅 2024-12-13
📈 Citations: 1
Influential: 0
🤖 AI Summary
To jointly optimize low latency, high naturalness, and content consistency in real-time interactive speech synthesis, this paper introduces CosyVoice 2, a streaming multilingual text-to-speech (TTS) system driven directly by large language models (LLMs). Methodologically, the authors adopt finite scalar quantization to improve speech-token codebook utilization; streamline the text-speech language model so that a pre-trained LLM can be used directly as its backbone; introduce a chunk-aware causal flow matching model that unifies streaming and non-streaming synthesis in a single model; and build a multilingual supervised discrete speech token representation. Trained on a large-scale multilingual dataset, CosyVoice 2 achieves minimal response latency, human-parity naturalness, and virtually lossless synthesis quality in streaming mode, outperforming existing streaming TTS systems in both quality and efficiency.
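
As context for the finite scalar quantization (FSQ) step mentioned above, the following is a minimal, illustrative sketch of the general FSQ idea rather than the paper's actual quantizer; the level counts, the tanh bounding, and the function name are assumptions. Each latent dimension is squashed into a small set of integer levels and rounded, with a straight-through estimator keeping the operation differentiable; since every combination of per-dimension levels is a valid code, the implicit codebook is used far more uniformly than a learned vector-quantization codebook.

```python
import torch

def fsq_quantize(z: torch.Tensor, levels=(5, 5, 5, 5)):
    """Toy finite-scalar quantizer; z has shape (..., len(levels))."""
    levels_t = torch.tensor(levels, dtype=z.dtype, device=z.device)
    half = (levels_t - 1) / 2                      # e.g. 5 levels -> integers in {-2, ..., 2}
    bounded = torch.tanh(z) * half                 # squash each dimension into (-half, half)
    rounded = torch.round(bounded)                 # snap to the nearest integer level
    # straight-through estimator: quantized values forward, identity gradient backward
    quantized = bounded + (rounded - bounded).detach()
    # flatten the per-dimension codes into a single discrete token id (mixed-radix encoding)
    codes = (rounded + half).long()                # shift each dimension to {0, ..., L-1}
    radix = torch.cumprod(
        torch.cat([torch.ones(1, dtype=torch.long, device=z.device),
                   levels_t.long()[:-1]]), dim=0)
    token_id = (codes * radix).sum(dim=-1)
    return quantized, token_id
```

This toy version assumes odd level counts per dimension; practical FSQ implementations add a half-level offset to handle even counts, and the exact configuration used in CosyVoice 2 may differ.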

📝 Abstract
In our previous work, we introduced CosyVoice, a multilingual speech synthesis model based on supervised discrete speech tokens. By employing progressive semantic decoding with two popular generative models, language models (LMs) and Flow Matching, CosyVoice demonstrated high prosody naturalness, content consistency, and speaker similarity in speech in-context learning. Recently, significant progress has been made in multi-modal large language models (LLMs), where the response latency and real-time factor of speech synthesis play a crucial role in the interactive experience. Therefore, in this report, we present an improved streaming speech synthesis model, CosyVoice 2, which incorporates comprehensive and systematic optimizations. Specifically, we introduce finite-scalar quantization to improve the codebook utilization of speech tokens. For the text-speech LM, we streamline the model architecture to allow direct use of a pre-trained LLM as the backbone. In addition, we develop a chunk-aware causal flow matching model to support various synthesis scenarios, enabling both streaming and non-streaming synthesis within a single model. By training on a large-scale multilingual dataset, CosyVoice 2 achieves human-parity naturalness, minimal response latency, and virtually lossless synthesis quality in the streaming mode. We invite readers to listen to the demos at https://funaudiollm.github.io/cosyvoice2.
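
To make the chunk-aware causal flow matching idea in the abstract more concrete, one common way to let a single model cover both streaming and non-streaming synthesis is to train it under attention masks that restrict each frame to its own chunk and everything before it. The sketch below is an assumption-laden illustration of such a mask, not the paper's code; the function name, the chunk indexing, and the convention that chunk_size=None means full context are invented for this example.

```python
import torch

def chunk_causal_mask(num_frames: int, chunk_size=None) -> torch.Tensor:
    """Boolean mask of shape (num_frames, num_frames); True means "may attend"."""
    if chunk_size is None:
        # full context: every frame sees every other frame (non-streaming / offline mode)
        return torch.ones(num_frames, num_frames, dtype=torch.bool)
    frame = torch.arange(num_frames)
    # index of the last frame inside each query frame's chunk
    chunk_end = (frame // chunk_size + 1) * chunk_size - 1
    # frame q may attend to any key k that lies at or before the end of q's chunk
    return frame.unsqueeze(0) <= chunk_end.unsqueeze(1)
```

With a small chunk size the model can produce audio for a chunk as soon as its inputs arrive, which is what enables streaming; exposing the model to several chunk sizes (including the full-context case) during training is one way the same weights can serve both the streaming and non-streaming scenarios described above.
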
Problem

Research questions and friction points this paper is trying to address.

multilingual speech synthesis
natural and efficient voice generation
compatibility with large pre-trained language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

efficient speech-token codebook utilization
direct use of large-scale pre-trained language models
near-human naturalness across multiple languages
Authors
Zhihao Du (Alibaba): speech separation, speech enhancement, speaker diarization
Yuxuan Wang (Alibaba Group, China)
Qian Chen (Alibaba Group, China)
Xian Shi (Qwen Team, Alibaba): speech recognition, audio LLM, Omni
Xiang Lv (Alibaba Group, China)
Tianyu Zhao (Alibaba Group, China)
Zhifu Gao (Alibaba Group, China)
Yexin Yang (Shanghai Jiao Tong University): speaker verification, speech processing, deep learning, machine learning
Changfeng Gao (Alibaba Group, China)
Hui Wang (Alibaba Group, China)
Fan Yu (Alibaba Group, China)
Huadai Liu (Alibaba Group, China)
Zhengyan Sheng (University of Science and Technology of China): speech synthesis, multimodality-driven speaker generation
Yue Gu (Alibaba Group, China)
Chong Deng (Alibaba Group): machine learning, natural language processing
Wen Wang (Alibaba Group, China)
Shiliang Zhang (Department of Computer Science, School of EECS, Peking University): multimedia information retrieval, multimedia systems, visual search
Zhijie Yan (Alibaba Group, China)
Jingren Zhou (Alibaba Group, China)