Universal Speech Token Learning via Low-Bitrate Neural Codec and Pretrained Representations

📅 2024-12-01
🏛️ IEEE Journal on Selected Topics in Signal Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current speech-language models decouple semantic and acoustic modeling: semantic tokens discard paralinguistic information (e.g., emotion, prosody), while acoustic synthesis relies heavily on prompts, suffering from poor cross-domain robustness and weak long-term consistency. To address this, we propose UniCodec—the first framework to jointly model semantic and paralinguistic information within a unified discrete token space. Leveraging a low-bitrate neural codec, self-supervised representation distillation, multi-scale discretization, and explicit semantic–paralinguistic disentanglement, UniCodec learns compact, disentangled, and information-rich universal speech tokens encoding both linguistic and paralinguistic attributes. This breaks the conventional two-stage paradigm, significantly improving naturalness, expressiveness, and paralinguistic fidelity across multilingual understanding and generation tasks, while simultaneously enhancing cross-domain robustness and long-term temporal coherence.

📝 Abstract
Current large speech language models are mainly based on semantic tokens from discretization of self-supervised learned representations and acoustic tokens from a neural codec, following a semantic-modeling and acoustic-synthesis paradigm. However, semantic tokens discard paralinguistic attributes of speakers that are important for natural spoken communication, while prompt-based acoustic synthesis from semantic tokens has limits in recovering paralinguistic details and suffers from robustness issues, especially when there are domain gaps between the prompt and the target. This paper unifies the two types of tokens and proposes UniCodec, a universal speech token learning framework that encapsulates all semantics of speech, including linguistic and paralinguistic information, into a compact and semantically-disentangled unified token. Such a unified token can not only benefit speech language models in understanding with paralinguistic hints but also help speech generation produce high-quality output. A low-bitrate neural codec is leveraged to learn such disentangled discrete representations at global and local scales, with knowledge distilled from self-supervised learned features. Extensive evaluations on multilingual datasets demonstrate its effectiveness in generating natural, expressive, and long-term consistent output, with paralinguistic attributes well preserved across several speech processing tasks.
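The abstract's "global and local scales" can be illustrated with a toy two-scale discretization: one utterance-level token drawn from a small codebook (capturing global attributes such as speaker or style) plus one frame-level token per time step from a larger codebook. This is a minimal sketch under assumed dimensions and codebook sizes, not the paper's actual architecture; `nearest_code` and `two_scale_tokens` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_code(x, codebook):
    """Index of the nearest codebook vector (L2) for each row of x."""
    # x: (n, dim), codebook: (k, dim) -> (n,) indices
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def two_scale_tokens(frames, global_book, local_book):
    """Toy two-scale discretization: one global (utterance-level) token
    from the mean embedding, plus one local token per frame."""
    g = frames.mean(axis=0, keepdims=True)       # (1, dim) utterance summary
    g_idx = int(nearest_code(g, global_book)[0]) # single global token
    l_idx = nearest_code(frames, local_book)     # (time,) local tokens
    return g_idx, l_idx

dim, T = 16, 50
frames = rng.normal(size=(T, dim))               # stand-in encoder output
global_book = rng.normal(size=(8, dim))          # small book: style/speaker
local_book = rng.normal(size=(64, dim))          # larger book: content
g_idx, l_idx = two_scale_tokens(frames, global_book, local_book)
```

A real low-bitrate codec would learn both codebooks jointly with the encoder and decoder; here they are random, only to show the token shapes at the two scales.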
Problem

Research questions and friction points this paper is trying to address.

Semantic and acoustic tokens are modeled separately in current speech language models
Semantic tokens discard paralinguistic attributes needed for natural spoken communication
Prompt-based acoustic synthesis suffers from robustness and long-term consistency issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified token combines linguistic and paralinguistic information
Low-bitrate neural codec for compact speech representation
Knowledge distillation from self-supervised learned features
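The distillation idea in the last bullet can be sketched as a per-frame cosine-alignment objective between the codec's quantized embeddings (student) and frozen self-supervised features (teacher). This is a hypothetical stand-in for the paper's actual distillation loss, assuming matched frame rates and feature dimensions.

```python
import numpy as np

def distill_loss(student, teacher):
    """Toy semantic-distillation objective: 1 minus per-frame cosine
    similarity between student and teacher features, averaged over time.
    student, teacher: (time, dim). Returns ~0 when perfectly aligned."""
    s = student / np.linalg.norm(student, axis=-1, keepdims=True)
    t = teacher / np.linalg.norm(teacher, axis=-1, keepdims=True)
    cos = (s * t).sum(axis=-1)          # per-frame cosine similarity
    return float((1.0 - cos).mean())

rng = np.random.default_rng(1)
teacher = rng.normal(size=(50, 16))     # frozen SSL features (stand-in)
loss_aligned = distill_loss(teacher, teacher)   # ~0: identical features
loss_random = distill_loss(rng.normal(size=(50, 16)), teacher)
```

Minimizing such a term pushes the discrete bottleneck toward the teacher's phonetic content while the codec's reconstruction loss retains acoustic detail.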
Xue Jiang
School of Information and Communication Engineering, Communication University of China, Beijing 100024, China
Xiulian Peng
Researcher at Microsoft Research Asia
deep learning · audio and speech · computer vision · real-time communication · image/video coding
Yuan Zhang
State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
Yan Lu
Microsoft Research Asia, Beijing 100080, China