🤖 AI Summary
This work addresses the challenge of achieving both high-fidelity reconstruction and computational efficiency when residual vector quantization (RVQ) with a fixed number of codebooks must handle audio signals of widely varying complexity. To this end, we propose SwitchCodec, a novel framework based on Residual Experts Vector Quantization (REVQ), which employs a shared base quantizer alongside sparsely activated expert quantizers dynamically routed according to input content. This design decouples bitrate from codebook capacity, enabling variable-bitrate inference without retraining. Experimental results demonstrate that SwitchCodec substantially outperforms existing baselines in both objective metrics and subjective listening quality, improving the efficiency of high-fidelity audio compression.
📝 Abstract
Recent neural audio compression models often rely on residual vector quantization for high-fidelity coding, but using a fixed number of per-frame codebooks is suboptimal for the wide variability of audio content, especially for signals that are either very simple or highly complex. To address this limitation, we propose SwitchCodec, a neural audio codec based on Residual Experts Vector Quantization (REVQ). REVQ combines a shared quantizer with dynamically routed expert quantizers that are activated according to the input audio, decoupling bitrate from codebook capacity and improving compression efficiency. This design ensures that every quantizer is fully trained and utilized. In addition, a variable-bitrate mechanism adjusts the number of active expert quantizers at inference, enabling multi-bitrate operation without retraining. Experiments demonstrate that SwitchCodec surpasses existing baselines on both objective metrics and subjective listening tests.
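To make the REVQ idea concrete, the following is a minimal sketch of the encoding path described above: a shared base quantizer handles every frame, and a router then activates a small subset of expert quantizers to refine the residual. The nearest-centroid routing score and all codebook shapes here are illustrative assumptions, not the paper's learned router or trained codebooks.

```python
import numpy as np

rng = np.random.default_rng(0)


def vq(x, codebook):
    """Nearest-neighbor vector quantization: return the closest codeword."""
    dists = np.linalg.norm(codebook - x, axis=1)
    return codebook[np.argmin(dists)]


def revq_encode(frame, base_codebook, expert_codebooks, num_active=2):
    """Sketch of REVQ encoding for one frame.

    The shared base quantizer is always applied; the residual is then
    routed to `num_active` expert quantizers (top-k by a stand-in
    affinity score), which quantize successive residuals in RVQ fashion.
    """
    residual = frame - vq(frame, base_codebook)
    recon = frame - residual  # base-quantizer reconstruction

    # Hypothetical routing score: affinity between the residual and each
    # expert's mean codeword (the paper uses a learned router instead).
    scores = [-np.linalg.norm(residual - cb.mean(axis=0))
              for cb in expert_codebooks]
    active = np.argsort(scores)[::-1][:num_active]  # top-k experts

    for idx in active:
        q = vq(residual, expert_codebooks[idx])
        recon = recon + q
        residual = residual - q
    return recon, sorted(active.tolist())


# Toy setup: 8-dim frames, 16-entry codebooks, 4 experts.
dim, cb_size, n_experts = 8, 16, 4
base = rng.normal(size=(cb_size, dim))
experts = [rng.normal(size=(cb_size, dim)) for _ in range(n_experts)]
frame = rng.normal(size=dim)

recon, active = revq_encode(frame, base, experts, num_active=2)
```

Varying `num_active` at inference is what gives the variable-bitrate behavior: each additional active expert contributes one more codebook index per frame to the bitstream, with no retraining required.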