🤖 AI Summary
Existing audio-language models (ALMs) rely on discrete token sequences produced by low-bitrate, lossy audio codecs, which compromise both audio fidelity and computational efficiency. To address this, the authors propose the Continuous Audio Language Model (CALM), which abandons discrete tokenization and instead directly models continuous audio frames. CALM employs a large Transformer backbone to encode contextual information, learns compact latent representations via an audio variational autoencoder (VAE), and uses a lightweight MLP decoder trained with consistency modeling for efficient generation. By eliminating quantization-induced distortion, CALM achieves superior audio fidelity in speech and music synthesis while significantly reducing computational overhead and inference latency. Experiments show that CALM outperforms state-of-the-art discrete ALMs across multiple audio generation benchmarks, unifying high-quality output, low latency, and computational efficiency.
📝 Abstract
Audio Language Models (ALMs) have emerged as the dominant paradigm for speech and music generation by representing audio as sequences of discrete tokens. Yet, unlike text tokens, which are invertible, audio tokens are extracted from lossy codecs with a limited bitrate. As a consequence, increasing audio quality requires generating more tokens, which imposes a trade-off between fidelity and computational cost. We address this issue by studying Continuous Audio Language Models (CALM). These models instantiate a large Transformer backbone that produces a contextual embedding at every timestep. This sequential information then conditions an MLP that generates the next continuous frame of an audio VAE through consistency modeling. By avoiding lossy compression, CALM achieves higher quality at lower computational cost than its discrete counterparts. Experiments on speech and music demonstrate improved efficiency and fidelity over state-of-the-art discrete audio language models, facilitating lightweight, high-quality audio generation. Samples are available at https://continuous-audio-language-models.github.io
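The generation loop the abstract describes (a Transformer backbone emits a contextual embedding per timestep, which conditions a lightweight MLP that produces the next continuous VAE frame in a single consistency-style step) can be sketched roughly as follows. This is a minimal toy illustration, not the paper's implementation: the backbone is replaced by a random projection of the frame history, all weights and dimensions (`D_CTX`, `D_FRAME`, `consistency_head`, etc.) are hypothetical, and a real system would decode the resulting latents to waveform with the VAE decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
D_FRAME, D_CTX, D_HID = 8, 16, 32  # hypothetical VAE-frame / embedding sizes

# Fixed random weights standing in for trained parameters.
W_ctx = rng.normal(size=(D_FRAME, D_CTX)) / np.sqrt(D_FRAME)
W1 = rng.normal(size=(D_CTX + D_FRAME, D_HID)) / np.sqrt(D_CTX + D_FRAME)
W2 = rng.normal(size=(D_HID, D_FRAME)) / np.sqrt(D_HID)

def backbone(history):
    """Stand-in for the large Transformer backbone: maps the history of
    continuous frames to one contextual embedding for the next timestep.
    (Here just a projection of the mean frame; the real model is a causal
    Transformer over the whole sequence.)"""
    h = np.stack(history).mean(axis=0) if history else np.zeros(D_FRAME)
    return np.tanh(h @ W_ctx)

def consistency_head(ctx, noise):
    """Lightweight MLP head: maps (context, noise sample) directly to a
    continuous VAE frame in one step, consistency-model style, instead of
    running an iterative diffusion sampler."""
    x = np.concatenate([ctx, noise])
    return np.tanh(x @ W1) @ W2

def generate(n_frames):
    """Autoregressive loop: each new continuous frame is conditioned on
    the backbone's summary of all previously generated frames."""
    frames = []
    for _ in range(n_frames):
        ctx = backbone(frames)
        z = rng.normal(size=D_FRAME)  # latent noise driving the sample
        frames.append(consistency_head(ctx, z))
    return np.stack(frames)  # continuous VAE latents, one row per frame

latents = generate(4)
print(latents.shape)  # (4, D_FRAME)
```

The key contrast with discrete ALMs is visible in the loop: each timestep emits one real-valued vector rather than several codec tokens, so sequence length (and thus backbone compute) does not grow with target audio fidelity.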