Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing speech-language models rely on Residual Vector Quantization (RVQ), introducing discretization errors and requiring complex hierarchical architectures, thereby compromising speech fidelity and inference efficiency. This paper proposes SLED: an end-to-end speech-language model operating in a continuous latent space, which directly encodes raw waveforms into continuous latent sequences. Crucially, SLED is the first to adopt energy distance as the autoregressive modeling objective, enabling continuous distribution matching and eliminating RVQ entirely—thereby avoiding quantization distortion and architectural redundancy. Combining conceptual simplicity with strong representational capacity, SLED achieves significant improvements over baselines in zero-shot and streaming speech synthesis tasks. It simultaneously delivers high modeling accuracy and real-time inference capability. Experimental results validate the effectiveness and feasibility of continuous latent-space modeling for general-purpose speech-language models.

📝 Abstract
We introduce SLED, an alternative approach to speech language modeling by encoding speech waveforms into sequences of continuous latent representations and modeling them autoregressively using an energy distance objective. The energy distance offers an analytical measure of the distributional gap by contrasting simulated and target samples, enabling efficient training to capture the underlying continuous autoregressive distribution. By bypassing reliance on residual vector quantization, SLED avoids discretization errors and eliminates the need for the complicated hierarchical architectures common in existing speech language models. It simplifies the overall modeling pipeline while preserving the richness of speech information and maintaining inference efficiency. Empirical results demonstrate that SLED achieves strong performance in both zero-shot and streaming speech synthesis, showing its potential for broader applications in general-purpose speech language models.
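The abstract describes the energy distance as contrasting simulated and target samples to measure the distributional gap. As a rough illustration (not the paper's implementation — the function name, Euclidean norm, and sampling setup are assumptions), a per-step generalized energy-distance estimator can be sketched in NumPy:

```python
import numpy as np

def energy_distance_loss(samples, target):
    """Energy-score estimator against a single target vector.

    samples: (m, d) array of m model draws; target: (d,) array.
    loss = (2/m) * sum_i ||x_i - y||
         - (1/(m*(m-1))) * sum_{i != j} ||x_i - x_j||
    The first term pulls samples toward the target; the second
    keeps them spread out, so minimizing matches distributions
    rather than collapsing to a point estimate.
    """
    m = samples.shape[0]
    attraction = 2.0 * np.mean(np.linalg.norm(samples - target, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]   # (m, m, d) pairwise differences
    pairwise = np.linalg.norm(diffs, axis=-1)           # (m, m) pairwise distances
    repulsion = pairwise.sum() / (m * (m - 1))          # diagonal is zero, so i == j drops out
    return attraction - repulsion
```

Because both terms are plain sample averages, the loss is computed analytically from model draws — no adversarial critic or likelihood is needed, which is what makes training in a continuous latent space efficient.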
Problem

Research questions and friction points this paper is trying to address.

Modeling speech in continuous latent space efficiently
Avoiding discretization errors in speech language models
Simplifying architecture while preserving speech information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encodes speech into continuous latent representations
Uses energy distance for efficient training
Avoids discretization errors and complex hierarchies
Zhengrui Ma
Institute of Computing Technology, Chinese Academy of Sciences
Language Modeling
Yang Feng
Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Pattern Recognition Center, WeChat AI, Tencent Inc
Chenze Shao
Tencent
Machine Translation · Natural Language Processing · Deep Learning
Fandong Meng
WeChat AI, Tencent
Machine Translation · Natural Language Processing
Jie Zhou
Pattern Recognition Center, WeChat AI, Tencent Inc
Min Zhang
School of Future Science and Engineering, Soochow University