🤖 AI Summary
Existing discrete audio tokenization research lacks unified, cross-task and cross-domain evaluation. Method: We systematically survey and benchmark state-of-the-art tokenizers across speech, music, and general audio, proposing a comprehensive taxonomy spanning encoder-decoder architecture, quantization mechanism (e.g., VQ/RVQ), training paradigm, streamability, and application domain. Contribution/Results: We evaluate tokenizers on standardized benchmarks covering reconstruction, downstream performance, and acoustic language modeling, and conduct controlled ablation studies that identify critical bottlenecks and characterize the empirical trade-offs among reconstruction quality, inference latency, and generalization capability. We open-source a standardized tokenizer database together with our core results.
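To make the residual vector quantization (RVQ) mechanism named in the taxonomy concrete, here is a minimal sketch with random, untrained codebooks (the function names and shapes are illustrative, not from the paper): each stage quantizes the residual left by the previous stage, and decoding sums the selected codewords.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual VQ: stage k quantizes the residual left by stages 1..k-1."""
    residual = x.copy()
    indices = []
    for cb in codebooks:                           # cb: (K, D) codebook
        dists = np.linalg.norm(cb - residual, axis=1)
        idx = int(np.argmin(dists))                # nearest codeword
        indices.append(idx)
        residual = residual - cb[idx]              # pass residual onward
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(cb[i] for i, cb in zip(indices, codebooks))

rng = np.random.default_rng(0)
codebooks = [rng.standard_normal((256, 8)) for _ in range(4)]  # 4 stages, 256 codes, dim 8
x = rng.standard_normal(8)            # stand-in for one encoder frame
tokens = rvq_encode(x, codebooks)     # 4 discrete tokens per frame
x_hat = rvq_decode(tokens, codebooks)
```

In a trained codec the codebooks are learned jointly with the encoder-decoder; the coarse-to-fine stage structure is what lets RVQ-based tokenizers trade bitrate (number of stages kept) against reconstruction quality.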
📝 Abstract
Discrete audio tokens are compact representations that aim to preserve perceptual quality, phonetic content, and speaker characteristics while enabling efficient storage and inference, as well as competitive performance across diverse downstream tasks. They provide a practical alternative to continuous features, enabling the integration of speech and audio into modern large language models (LLMs). As interest in token-based audio processing grows, various tokenization methods have emerged, and several surveys have reviewed the latest progress in the field. However, existing studies often focus on specific domains or tasks and lack a unified comparison across benchmarks. This paper presents a systematic review and benchmark of discrete audio tokenizers, covering three domains: speech, music, and general audio. We propose a taxonomy of tokenization approaches based on encoder-decoder architecture, quantization technique, training paradigm, streamability, and application domain. We evaluate tokenizers on multiple benchmarks for reconstruction, downstream performance, and acoustic language modeling, and analyze trade-offs through controlled ablation studies. Our findings highlight key limitations, practical considerations, and open challenges, providing insights and guidance for future research in this rapidly evolving area. For more information, including our main results and tokenizer database, please refer to our website: https://poonehmousavi.github.io/dates-website/.