Q2D2: A Geometry-Aware Audio Codec Leveraging Two-Dimensional Quantization

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current neural audio codecs employ quantization schemes—such as residual vector quantization (RVQ), vector quantization (VQ), and finite scalar quantization (FSQ)—that suffer from limited geometric modeling capacity in latent space, resulting in weak correlation capture among features, low codebook utilization, and high token rates. To address this, we propose Q2D2, a geometry-aware audio compression framework that, for the first time, jointly quantizes feature pairs onto structured 2D grids (hexagonal, rhombic, or rectangular), implicitly constructing efficient codebooks. This approach abandons the oversimplified manifold assumptions of scalar or vector quantization, enhancing geometric consistency and collaborative feature representation while preserving reconstruction fidelity. Experiments on speech reconstruction demonstrate that Q2D2 matches or surpasses state-of-the-art models in both objective and subjective quality metrics, achieves significantly higher codebook utilization, and validates—through ablation—the critical role of 2D grid-based quantization.

📝 Abstract
Recent neural audio codecs have achieved impressive reconstruction quality, typically relying on quantization methods such as Residual Vector Quantization (RVQ), Vector Quantization (VQ), and Finite Scalar Quantization (FSQ). However, these quantization techniques limit the geometric structure of the latent space and make it harder to capture correlations between features, leading to inefficiency in representation learning, codebook utilization, and token rate. In this paper we introduce Two-Dimensional Quantization (Q2D2), a quantization scheme in which feature pairs are projected onto structured 2D grids, such as hexagonal, rhombic, or rectangular tilings, and quantized to the nearest grid values, yielding an implicit codebook defined by the product of grid levels, with codebook sizes comparable to conventional methods. Despite its simple geometric formulation, Q2D2 improves audio compression efficiency, with low token rates and high codebook utilization, while maintaining state-of-the-art reconstruction quality. Specifically, Q2D2 achieves competitive to superior performance on various objective and subjective reconstruction metrics across extensive experiments in the speech domain, compared to state-of-the-art models. Comprehensive ablation studies further confirm the effectiveness of our design choices.
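The snap-to-grid step the abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the `step` spacing is arbitrary, and the hexagonal case is handled by viewing the hexagonal lattice as the union of two offset rectangular sublattices, a standard construction that the paper may or may not use.

```python
import math

def quantize_rect(pair, step=0.5):
    # Nearest point of a square grid with spacing `step` (rectangular tiling).
    x, y = pair
    return (round(x / step) * step, round(y / step) * step)

def quantize_hex(pair, step=0.5):
    # Nearest point of a hexagonal lattice with horizontal spacing `step`.
    # The lattice is the union of two rectangular sublattices: one at the
    # origin, one offset by half a cell horizontally and one row vertically.
    x, y = pair
    dy = step * math.sqrt(3) / 2          # vertical spacing between rows
    best = None
    for ox, oy in ((0.0, 0.0), (step / 2, dy)):
        qx = round((x - ox) / step) * step + ox
        qy = round((y - oy) / (2 * dy)) * (2 * dy) + oy
        d = (x - qx) ** 2 + (y - qy) ** 2
        if best is None or d < best[0]:
            best = (d, (qx, qy))
    return best[1]
```

Quantizing each feature pair independently like this needs no learned codebook: the token for a pair is simply the index of its grid cell.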
Problem

Research questions and friction points this paper is trying to address.

Existing quantization schemes (RVQ, VQ, FSQ) impose oversimplified geometric structure on the latent space
Weak capture of correlations among features leads to inefficient representation learning
Current codecs suffer from low codebook utilization and high token rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-Dimensional Quantization (Q2D2) jointly quantizes feature pairs onto structured 2D grids (hexagonal, rhombic, or rectangular)
An implicit codebook, defined by the product of grid levels, replaces a learned explicit codebook
Achieves low token rates and high codebook utilization while maintaining state-of-the-art reconstruction quality
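The "implicit codebook from grid products" point above reduces to simple arithmetic; the grid sizes below are illustrative assumptions, not values from the paper:

```python
import math

def implicit_codebook_size(levels_x, levels_y):
    # Each feature pair indexes one grid cell, so the effective codebook
    # is the product of per-axis level counts -- nothing is learned.
    return levels_x * levels_y

def bits_per_pair(levels_x, levels_y):
    # Bits needed to transmit one quantized feature pair.
    return math.log2(implicit_codebook_size(levels_x, levels_y))

# Example: a 32x32 grid per pair matches a conventional 1024-entry
# VQ codebook at 10 bits per token.
print(implicit_codebook_size(32, 32))  # 1024
print(bits_per_pair(32, 32))           # 10.0
```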
Eliya Nachmani
Ben-Gurion University; Google Research
Deep Learning · Speech · Audio · Signal Processing · Information Theory
Tal Shuster
Department of Electronics and Computing Engineering, Ben-Gurion University, Israel