🤖 AI Summary
Existing speech-language models typically rely on Residual Vector Quantization (RVQ), which introduces discretization errors and demands complex hierarchical architectures, compromising both speech fidelity and inference efficiency. This paper proposes SLED, an end-to-end speech-language model that operates in a continuous latent space, directly encoding raw waveforms into continuous latent sequences. Crucially, SLED adopts energy distance as its autoregressive modeling objective, enabling continuous distribution matching and eliminating RVQ entirely, thereby avoiding quantization distortion and architectural redundancy. Combining conceptual simplicity with strong representational capacity, SLED improves significantly over baselines on zero-shot and streaming speech synthesis while delivering both high modeling accuracy and real-time inference. The experimental results support continuous latent-space modeling as a viable foundation for general-purpose speech-language models.
📝 Abstract
We introduce SLED, an alternative approach to speech language modeling by encoding speech waveforms into sequences of continuous latent representations and modeling them autoregressively using an energy distance objective. The energy distance offers an analytical measure of the distributional gap by contrasting simulated and target samples, enabling efficient training to capture the underlying continuous autoregressive distribution. By bypassing reliance on residual vector quantization, SLED avoids discretization errors and eliminates the need for the complicated hierarchical architectures common in existing speech language models. It simplifies the overall modeling pipeline while preserving the richness of speech information and maintaining inference efficiency. Empirical results demonstrate that SLED achieves strong performance in both zero-shot and streaming speech synthesis, showing its potential for broader applications in general-purpose speech language models.
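The abstract describes energy distance as an analytical measure of the gap between two distributions, estimated by contrasting simulated and target samples. The paper's actual objective operates on model-generated and ground-truth continuous latent sequences; the NumPy sketch below only illustrates the generic sample-based energy distance estimator on toy data (the function names are illustrative, not from the paper):

```python
import numpy as np

def pairwise_dists(a, b):
    # Euclidean distance between every row of `a` and every row of `b`
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def energy_distance(x, y):
    """Sample-based estimate of the (squared) energy distance:
        ED = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||
    Non-negative, and zero exactly when the two distributions match,
    which is what makes it usable as a distribution-matching loss."""
    exy = pairwise_dists(x, y).mean()
    exx = pairwise_dists(x, x).mean()
    eyy = pairwise_dists(y, y).mean()
    return 2.0 * exy - exx - eyy

rng = np.random.default_rng(0)
# Two clouds from the same distribution vs. a mean-shifted one
same = energy_distance(rng.normal(0, 1, (500, 8)), rng.normal(0, 1, (500, 8)))
diff = energy_distance(rng.normal(0, 1, (500, 8)), rng.normal(2, 1, (500, 8)))
print(same, diff)  # the distance grows when the distributions differ
```

In a training setting, the "simulated" samples would come from the model's autoregressive predictions and the "target" samples from the encoded waveform latents, with the estimator averaged over a batch and backpropagated through the simulated side.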