🤖 AI Summary
Existing speech codecs suffer from impure semantic encoding (retaining paralinguistic attributes such as timbre and emotion) at low bitrates, insufficient semantic completeness and reconstruction fidelity, and lack of native streaming support. To address these issues, this paper proposes a semantic–paralinguistic disentangled, single-codebook streaming speech codec. Methodologically, we integrate variational autoencoders (VAEs) with finite scalar quantization (FSQ), design a contrastive learning–based cross-modal alignment mechanism to align speech and text in a unified frame-level multimodal latent space, and introduce acoustic-constrained multi-stage optimization. Experiments demonstrate state-of-the-art PESQ scores of 1.77 (at 0.27 kbps) and 2.58 (at 1 kbps). The model fully supports real-time streaming, yields purer semantic representations, and produces more natural reconstructions. Code and pretrained models are publicly available.
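The summary above mentions combining a VAE with finite scalar quantization (FSQ). As a rough, generic sketch of how FSQ works (not the paper's implementation — the function names, `tanh` bounding, and level counts below are illustrative assumptions), each latent dimension is bounded and rounded to a small fixed set of levels; the codebook is the implicit product of per-dimension levels, so every code is reachable and utilization stays high without a learned codebook:

```python
import numpy as np

def fsq_quantize(z, levels):
    """FSQ sketch: bound each latent dim to (-1, 1), then snap it to
    one of `levels[i]` evenly spaced values on that interval."""
    z = np.tanh(np.asarray(z, dtype=float))   # bound to (-1, 1)
    half = (np.asarray(levels) - 1) / 2.0     # e.g. 3 levels -> half = 1
    return np.round(z * half) / half          # snap to the level grid

def fsq_index(q, levels):
    """Map a quantized vector to a single integer token id
    (mixed-radix number over the per-dimension level counts)."""
    half = (np.asarray(levels) - 1) / 2.0
    digits = np.round(q * half + half).astype(int)  # 0..levels[i]-1 per dim
    idx = 0
    for d, n in zip(digits, levels):
        idx = idx * n + d
    return int(idx)
```

With `levels = [3, 3, 3]` the implicit single codebook has 3x3x3 = 27 entries; in training, the non-differentiable rounding step is typically bridged with a straight-through gradient estimator.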
📝 Abstract
Speech codecs serve as a crucial bridge in unifying speech and text language models. Existing codec methods face several challenges in semantic encoding: residual paralinguistic information (e.g., timbre, emotion), insufficient semantic completeness, limited reconstruction capability, and lack of streaming support. To address these challenges, we propose SecoustiCodec, a cross-modal aligned, low-bitrate streaming speech codec that disentangles semantic and paralinguistic information in a single-codebook space. To ensure semantic completeness and reconstruction fidelity, paralinguistic encoding is introduced to bridge the information gap between semantic and acoustic encoding. A semantic-only efficient quantization method based on a VAE (Variational Autoencoder) and FSQ (Finite Scalar Quantization) is proposed; it alleviates the long-tail distribution problem of tokens while maintaining high codebook utilization. A semantic disentanglement method based on contrastive learning aligns text and speech in a joint multimodal frame-level space, effectively removing paralinguistic information from the semantic encoding. An acoustic-constrained multi-stage optimization strategy is proposed to ensure robust and stable convergence. Figure~\ref{fig:pesq_kbps_below_2kbps} shows that SecoustiCodec achieves SOTA (state-of-the-art) reconstruction quality (PESQ) of 1.77/2.58 at 0.27/1 kbps. SecoustiCodec's demo, code, and model weights have been open-sourced.
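The abstract describes contrastive learning that aligns text and speech in a frame-level multimodal space. As a generic InfoNCE-style sketch of such an objective (an assumption for illustration, not SecoustiCodec's actual loss), each speech frame is pulled toward its aligned text frame and pushed away from the other frames in the sequence:

```python
import numpy as np

def info_nce_alignment(speech, text, temp=0.1):
    """InfoNCE-style frame-level contrastive loss sketch.

    speech, text: (T, d) arrays of L2-normalized frame embeddings,
    where frame t of `text` is the positive for frame t of `speech`."""
    sims = speech @ text.T / temp                      # (T, T) similarities
    logits = sims - sims.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))         # -log p(positive frame)
```

Minimizing a loss of this shape drives the semantic encoding toward what the text explains, which is one way paralinguistic attributes (timbre, emotion) get squeezed out of the semantic stream.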