DashengTokenizer: One layer is enough for unified audio understanding and generation

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes DashengTokenizer, a novel audio tokenizer that challenges the conventional “acoustic-then-semantic” paradigm by unifying audio understanding and generation within a single-layer architecture. Unlike traditional approaches that require separate training for understanding and generation tasks and rely on complex variational autoencoder (VAE) frameworks, DashengTokenizer freezes pretrained semantic features and injects acoustic information in a non-VAE manner. This design eliminates the long-standing assumption that VAEs are essential for audio synthesis. Evaluated across 22 diverse tasks, the method significantly outperforms existing encoder and codec baselines, achieving superior performance in speech emotion recognition, music understanding, text-to-audio and text-to-music generation, and speech enhancement.

📝 Abstract
This paper introduces DashengTokenizer, a continuous audio tokenizer engineered for joint use in both understanding and generation tasks. Unlike conventional approaches, which train acoustic tokenizers and subsequently integrate frozen semantic knowledge, our method inverts this paradigm: we leverage frozen semantic features and inject acoustic information. In linear evaluation across 22 diverse tasks, our method outperforms previous audio codec and audio encoder baselines by a significant margin while maintaining competitive audio reconstruction quality. Notably, we demonstrate that this acoustic injection improves performance for tasks such as speech emotion recognition, music understanding, and acoustic scene classification. We further evaluate the tokenizer's generative performance on text-to-audio (TTA), text-to-music (TTM), and speech enhancement (SE). Our approach surpasses standard variational autoencoder (VAE)-based methods on TTA and TTM tasks, while its effectiveness on SE underscores its capabilities as a general-purpose audio encoder. Finally, our results challenge the prevailing assumption that VAE-based architectures are a prerequisite for audio synthesis. Checkpoints are available at https://huggingface.co/mispeech/dashengtokenizer.
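The core inversion the abstract describes, keeping a pretrained semantic encoder frozen and training only a small branch that injects acoustic detail, can be sketched as follows. This is a minimal conceptual illustration, not the paper's actual architecture: the module names, dimensions, and the additive form of the injection are all assumptions for the sake of the example.

```python
# Conceptual sketch of "frozen semantic features + acoustic injection".
# NOT the DashengTokenizer implementation; shapes and the additive
# combination are illustrative assumptions.
import torch
import torch.nn as nn


class FrozenSemanticWithAcousticInjection(nn.Module):
    def __init__(self, semantic_encoder: nn.Module, in_dim: int, feat_dim: int):
        super().__init__()
        self.semantic = semantic_encoder
        # Freeze the pretrained semantic encoder: it is never updated.
        for p in self.semantic.parameters():
            p.requires_grad = False
        # Small trainable branch that injects low-level acoustic information.
        self.acoustic = nn.Linear(in_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            s = self.semantic(x)      # frozen semantic features
        return s + self.acoustic(x)  # acoustic detail added on top


# "Linear evaluation" then trains only a single linear classifier
# on top of the (frozen) tokenizer output for each downstream task.
if __name__ == "__main__":
    encoder = nn.Linear(64, 32)  # stand-in for a pretrained semantic encoder
    tokenizer = FrozenSemanticWithAcousticInjection(encoder, in_dim=64, feat_dim=32)
    probe = nn.Linear(32, 10)    # e.g. 10-class emotion recognition head
    features = tokenizer(torch.randn(2, 64))
    logits = probe(features)
    print(logits.shape)  # torch.Size([2, 10])
```

Only `self.acoustic` and the per-task linear probe carry gradients here; the semantic backbone stays fixed, mirroring the paper's claim that understanding quality is preserved while acoustic information is added for generation.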
Problem

Research questions and friction points this paper is trying to address.

audio understanding
audio generation
unified representation
acoustic tokenization
semantic-acoustic integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

audio tokenizer
unified understanding and generation
acoustic injection
frozen semantic features
non-VAE audio synthesis