SecoustiCodec: Cross-Modal Aligned Streaming Single-Codebook Speech Codec

📅 2025-08-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing speech codecs suffer from impure semantic encoding (retaining paralinguistic attributes such as timbre and emotion) at low bitrates, insufficient semantic completeness and reconstruction fidelity, and lack of native streaming support. To address these issues, this paper proposes a semantic–paralinguistic disentangled, single-codebook streaming speech codec. Methodologically, we integrate variational autoencoders (VAEs) with finite scalar quantization (FSQ), design a contrastive learning–based cross-modal alignment mechanism to align speech and text in a unified frame-level multimodal latent space, and introduce acoustic-constrained multi-stage optimization. Experiments demonstrate state-of-the-art PESQ scores of 1.77 (at 0.27 kbps) and 2.58 (at 1 kbps). The model fully supports real-time streaming, yields purer semantic representations, and produces more natural reconstructions. Code and pretrained models are publicly available.

📝 Abstract
Speech codecs serve as a crucial bridge in unifying speech and text language models. Existing codec methods face several challenges in semantic encoding, such as residual paralinguistic information (e.g., timbre, emotion), insufficient semantic completeness, limited reconstruction capability, and lack of support for streaming. To address these challenges, we propose SecoustiCodec, a cross-modal aligned low-bitrate streaming speech codec that disentangles semantic and paralinguistic information in a single-codebook space. To ensure semantic completeness and reconstruction fidelity, paralinguistic encoding is introduced to bridge the information gap between semantic and acoustic encoding. A semantic-only efficient quantization method based on VAE (Variational Autoencoder) and FSQ (Finite Scalar Quantization) is proposed. This approach alleviates the long-tail distribution problem of tokens while maintaining high codebook utilization. A semantic disentanglement method based on contrastive learning is proposed, which aligns text and speech in a joint multimodal frame-level space, effectively removing paralinguistic information from semantic encoding. An acoustic-constrained multi-stage optimization strategy is proposed to ensure robust and stable convergence. Figure~\ref{fig:pesq_kbps_below_2kbps} shows SecoustiCodec achieves SOTA (state-of-the-art) reconstruction quality (PESQ) of 1.77/2.58 at 0.27/1 kbps. We've open-sourced SecoustiCodec's demo, code, and model weights.
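The FSQ step of the VAE–FSQ quantizer described in the abstract can be illustrated with a minimal sketch. Everything below (the level counts, the `fsq_quantize` name, the tanh bounding) is a hypothetical illustration of finite scalar quantization in general, not the paper's actual implementation:

```python
import numpy as np

def fsq_quantize(z, levels=(8, 8, 8, 5, 5)):
    """Finite scalar quantization sketch: bound each latent dimension,
    then round it onto a small per-dimension uniform grid. The product
    of the grids acts as one implicit codebook (illustrative only)."""
    z = np.tanh(np.asarray(z, dtype=float))        # bound each dim to (-1, 1)
    L = np.asarray(levels)
    k = np.round((z + 1.0) / 2.0 * (L - 1))        # per-dim level index in [0, L-1]
    codes = 2.0 * k / (L - 1) - 1.0                # quantized values back in [-1, 1]
    # Flatten the per-dimension indices into a single token id, so the
    # implicit codebook has prod(levels) entries (here 8*8*8*5*5 = 12800).
    bases = np.cumprod(np.concatenate(([1], L[:-1])))
    indices = (k @ bases).astype(int)
    return codes, indices
```

Because every codeword in the product grid is reachable by construction, this style of quantizer avoids the dead-codebook and long-tail issues the abstract attributes to conventional learned codebooks.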
Problem

Research questions and friction points this paper is trying to address.

Disentangle semantic and paralinguistic info in speech
Improve semantic completeness and reconstruction fidelity
Enable low-bitrate streaming with cross-modal alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-modal aligned single-codebook speech codec
VAE and FSQ based semantic quantization
Contrastive learning for semantic disentanglement
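The contrastive disentanglement idea above can be sketched with a generic InfoNCE-style loss over time-aligned frame pairs. The function name, temperature value, and shapes below are assumptions for illustration, not the paper's objective:

```python
import numpy as np

def frame_alignment_loss(speech_emb, text_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: frame t of the speech encoder
    should match frame t of the text encoder in a shared latent space,
    with all other frames serving as negatives (illustrative sketch)."""
    s = speech_emb / np.linalg.norm(speech_emb, axis=-1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    logits = (s @ t.T) / temperature                 # (T, T) cosine similarities
    logits -= logits.max(axis=-1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -float(np.mean(np.diag(log_probs)))       # matched pairs lie on the diagonal
```

Pulling each speech frame toward its text counterpart while pushing it away from other frames encourages the semantic codes to keep only text-predictable content, discarding timbre and emotion.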
Chunyu Qiang
Kuaishou Technology; TJU; CASIA
Speech Synthesis
Haoyu Wang
Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
Cheng Gong
Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
Tianrui Wang
Tianjin University
Speech Signal Processing
Ruibo Fu
Associate Professor, CASIA
AIGC, LMM, Intelligent speech interaction, Deepfake detection
Tao Wang
Institute of Automation Chinese Academy of Sciences, Beijing, China
Ruilong Chen
Kuaishou Technology; NUDT; BUAA
Speech Processing, Computer Vision
Jiangyan Yi
Tsinghua University
speech signal processing, speech synthesis, fake audio detection, continual learning
Zhengqi Wen
Tsinghua University
LLM
Chen Zhang
Kuaishou Technology, Beijing, China
Longbiao Wang
Professor, Tianjin University
Speech Processing, Speech recognition, speaker recognition, acoustic signal processing, speech enhancement
Jianwu Dang
JAIST, Japan / Tianjin Univ., China
Speech Science, speech production, EEG, disordered speech
Jianhua Tao
Department of Automation, BNRist, Tsinghua University, Beijing, China