Discrete Audio Tokens: More Than a Survey!

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing discrete audio tokenization research lacks unified, cross-task and cross-domain evaluation. Method: The paper systematically surveys and benchmarks state-of-the-art tokenizers across speech, music, and general audio, proposing a taxonomy that spans encoder-decoder architecture, quantization technique (e.g., VQ/RVQ), training paradigm, streamability, and application domain. Contribution/Results: Tokenizers are evaluated on multiple standardized benchmarks covering reconstruction, downstream performance, and acoustic language modeling, with controlled ablation studies that expose key trade-offs and bottlenecks. The authors open-source a standardized tokenizer database together with their main results.

📝 Abstract
Discrete audio tokens are compact representations that aim to preserve perceptual quality, phonetic content, and speaker characteristics while enabling efficient storage and inference, as well as competitive performance across diverse downstream tasks. They provide a practical alternative to continuous features, enabling the integration of speech and audio into modern large language models (LLMs). As interest in token-based audio processing grows, various tokenization methods have emerged, and several surveys have reviewed the latest progress in the field. However, existing studies often focus on specific domains or tasks and lack a unified comparison across various benchmarks. This paper presents a systematic review and benchmark of discrete audio tokenizers, covering three domains: speech, music, and general audio. We propose a taxonomy of tokenization approaches based on encoder-decoder, quantization techniques, training paradigm, streamability, and application domains. We evaluate tokenizers on multiple benchmarks for reconstruction, downstream performance, and acoustic language modeling, and analyze trade-offs through controlled ablation studies. Our findings highlight key limitations, practical considerations, and open challenges, providing insight and guidance for future research in this rapidly evolving area. For more information, including our main results and tokenizer database, please refer to our website: https://poonehmousavi.github.io/dates-website/.
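The taxonomy above distinguishes tokenizers by their quantization technique, most commonly vector quantization (VQ) and its residual variant (RVQ), in which each stage quantizes the residual left by the previous one. As a rough illustration of the idea only (not any specific codec from the paper; the stage count, codebook size, and frame dimension below are arbitrary toy values), here is a minimal NumPy sketch of residual vector quantization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical values): 2 quantizer stages,
# 16-entry codebooks, 8-dimensional encoder frames.
num_stages, codebook_size, dim = 2, 16, 8
codebooks = [rng.normal(size=(codebook_size, dim)) for _ in range(num_stages)]

def rvq_encode(frame, codebooks):
    """Greedy RVQ: each stage picks the codeword nearest to the current
    residual, emitting one discrete token index per stage."""
    residual = np.array(frame, dtype=float)
    tokens = []
    for cb in codebooks:
        dists = np.linalg.norm(cb - residual, axis=1)  # distance to every code
        idx = int(np.argmin(dists))                    # nearest codebook entry
        tokens.append(idx)
        residual = residual - cb[idx]                  # pass residual to next stage
    return tokens

def rvq_decode(tokens, codebooks):
    """Reconstruction is the sum of the codewords selected at each stage."""
    return sum(cb[idx] for idx, cb in zip(tokens, codebooks))

frame = rng.normal(size=dim)        # stand-in for one encoder output frame
tokens = rvq_encode(frame, codebooks)
recon = rvq_decode(tokens, codebooks)
```

In a real neural codec the codebooks are learned jointly with the encoder and decoder, and adding stages trades bitrate for reconstruction fidelity; plain VQ is the special case of a single stage.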
Problem

Research questions and friction points this paper is trying to address.

Systematic review and benchmark of discrete audio tokenizers
Evaluate tokenizers on reconstruction and downstream tasks
Analyze trade-offs and highlight open challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic review of discrete audio tokenizers
Taxonomy based on encoder-decoder and quantization
Benchmarking across speech, music, general audio
Pooneh Mousavi
Mila and Concordia University
Conversational AI, Speech Processing, Multimodal Learning
Gallil Maimon
Hebrew University of Jerusalem
Machine Learning, Artificial Intelligence, Speech and Audio Processing, Natural Language Processing
Adel Moumen
University of Cambridge
Deep LearningSpeech
Darius Petermann
Applied Scientist @ Amazon AGI | Ex. Apple, Google, Netflix, MERL
sound separation, neural audio coding, machine listening, audio processing
Jiatong Shi
Carnegie Mellon University
Haibin Wu
Meta
speech processing, multi-modal, speech synthesis, LLM
Haici Yang
Dolby Laboratories, Indiana University Bloomington
Anastasia Kuznetsova
PhD, Computer Science, Indiana University
Speech and Audio Processing
Artem Ploujnikov
R. Marxer
Université de Toulon
B. Ramabhadran
Google
Benjamin Elizalde
Apple, Microsoft, Carnegie Mellon University
Machine Listening, Acoustics & Sound, AI for Sound
Loren Lugosch
Apple
Audio, Language, Computers, Artificial Intelligence, Signal Processing
Jinyu Li
Partner Applied Science Manager, Microsoft
Acoustic Modeling, Speech Recognition, Speech Translation
Cem Subakan
Assistant Prof. at Laval University, Computer Science Dept. / Mila, Associate Academic Member
Machine Learning, Learning Algorithms, Machine Learning for Speech and Audio
Phil Woodland
University of Cambridge
Minje Kim
University of Illinois at Urbana-Champaign
Hung-yi Lee
National Taiwan University
deep learning, spoken language understanding, speech processing
Shinji Watanabe
Carnegie Mellon University
Speech recognition, Speech processing, Speech enhancement, Speech translation
Yossi Adi
The Hebrew University of Jerusalem
Machine Learning, AI, Spoken Language Modeling, Audio Speech and Language Processing
M. Ravanelli
Concordia University, Université de Montréal, Mila-Quebec AI Institute