MagiCodec: Simple Masked Gaussian-Injected Codec for High-Fidelity Reconstruction and Generation

📅 2025-05-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing neural audio codecs prioritize reconstruction fidelity at the expense of the semantic interpretability and modelability of their discrete tokens, which limits their usefulness in LLM-driven generative tasks. To address this, we propose MagiCodec: a single-layer, streaming Transformer-based audio codec that introduces, for the first time in audio coding, a Gaussian noise injection mechanism with an interpretable frequency-domain effect, jointly optimized with latent-space regularization to balance reconstruction accuracy and semantic expressiveness. This design encourages the discrete tokens to follow a Zipf-like distribution, substantially improving compatibility with large language models. Through multi-stage training and discrete quantization, MagiCodec achieves state-of-the-art performance on objective metrics (including LSD and MCD) and on downstream tasks such as text-to-speech and audio synthesis, establishing an efficient, semantically grounded representation for joint audio-language modeling.
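The noise-injection mechanism described above can be pictured with a minimal sketch: zero-mean Gaussian noise is added to the encoder latents during training and disabled at inference. The module below is a hypothetical illustration only; the class name, the `sigma` hyperparameter, and its default value are assumptions, not taken from the paper or its repository.

```python
import torch
import torch.nn as nn

class NoisyBottleneck(nn.Module):
    """Minimal sketch of Gaussian noise injection before quantization.

    During training, zero-mean Gaussian noise is added to the encoder
    latents; at inference the latents pass through unchanged. `sigma`
    is a hypothetical hyperparameter, not a value from the paper.
    """

    def __init__(self, sigma: float = 0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        if self.training and self.sigma > 0:
            # Perturb the latents so the model learns representations that
            # survive the perturbation and subsequent quantization.
            latents = latents + self.sigma * torch.randn_like(latents)
        return latents
```

In a full codec this would sit between the encoder and the quantizer; the paper additionally pairs noise injection with latent regularization, which is not sketched here.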

📝 Abstract
Neural audio codecs have made significant strides in efficiently mapping raw audio waveforms into discrete token representations, which are foundational for contemporary audio generative models. However, most existing codecs are optimized primarily for reconstruction quality, often at the expense of the downstream modelability of the encoded tokens. Motivated by the need to overcome this bottleneck, we introduce MagiCodec, a novel single-layer, streaming Transformer-based audio codec. MagiCodec is designed with a multistage training pipeline that incorporates Gaussian noise injection and latent regularization, explicitly targeting the enhancement of semantic expressiveness in the generated codes while preserving high reconstruction fidelity. We analytically derive the effect of noise injection in the frequency domain, demonstrating its efficacy in attenuating high-frequency components and fostering robust tokenization. Extensive experimental evaluations show that MagiCodec surpasses state-of-the-art codecs in both reconstruction quality and downstream tasks. Notably, the tokens produced by MagiCodec exhibit Zipf-like distributions, as observed in natural languages, thereby improving compatibility with language-model-based generative architectures. The code and pre-trained models are available at https://github.com/Ereboas/MagiCodec.
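The Zipf-like distribution claimed in the abstract can be checked with a simple rank-frequency diagnostic on the emitted token ids. The helper below is a generic sketch, not code from the MagiCodec repository; the function name and the codebook size in the usage comment are illustrative assumptions.

```python
import numpy as np

def zipf_slope(token_ids: np.ndarray) -> float:
    """Estimate the Zipf exponent of a token stream.

    Fits log(frequency) ~ -s * log(rank); a slope s near 1 indicates the
    Zipf-like rank-frequency behavior the paper reports for its tokens.
    Generic diagnostic, not taken from the MagiCodec codebase.
    """
    counts = np.bincount(token_ids)
    counts = np.sort(counts[counts > 0])[::-1]   # token frequencies, descending
    ranks = np.arange(1, len(counts) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(counts), 1)
    return -slope

# Example with synthetic tokens (1024-entry codebook is illustrative):
# tokens = np.random.zipf(1.2, size=100_000) % 1024
# print(zipf_slope(tokens))
```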
Problem

Research questions and friction points this paper is trying to address.

Enhancing semantic expressiveness in audio codecs
Balancing reconstruction quality and token modelability
Improving compatibility with language-model architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Single-layer streaming Transformer-based audio codec
Gaussian noise injection for robust tokenization
Multistage training with latent regularization
🔎 Similar Papers
No similar papers found.
Yakun Song
Shanghai Jiao Tong University, Bytedance Inc.
Jiawei Chen
Bytedance Inc.
Xiaobin Zhuang
Bytedance
Chenpeng Du
ByteDance
Ziyang Ma
Shanghai Jiao Tong University, Bytedance Inc.
Jian Wu
Bytedance Inc.
Jian Cong
ByteDance Seed
Dongya Jia
ByteDance Seed
Zhuo Chen
Bytedance Inc.
Yuping Wang
Bytedance Inc.
Yuxuan Wang
Bytedance Inc.
Xie Chen
Shanghai Jiao Tong University