🤖 AI Summary
Semantic relevance and acoustic fidelity in speech representations are often in tension. This paper proposes a hierarchical, disentangled self-supervised speech coding framework: the first codebook models discrete semantic tokens, while subsequent codebooks progressively encode acoustic residuals, explicitly decoupling semantic and acoustic features. The method combines vector quantization, multi-level codebook modeling, and an efficient encoder architecture, achieving both high ASR accuracy and markedly improved speech reconstruction quality. Experiments show that, compared to SpeechTokenizer, the approach reduces word error rate (WER) by 44% relative, substantially improves the mean opinion score (MOS) of reconstructed speech, and halves the bitrate (2× compression). To the authors' knowledge, this is the first unified framework to attain both high recognition robustness and high-fidelity speech reconstruction.
📝 Abstract
Effective speech representations for spoken language models must balance semantic relevance with acoustic fidelity for high-quality reconstruction. However, existing approaches struggle to achieve both simultaneously. To address this, we introduce Hierarchical Acoustic and Semantic Representation Disentanglement (HASRD, pronounced "hazard"), a framework that factorizes self-supervised learning representations into discrete semantic and acoustic tokens. HASRD assigns the semantic representation to the first codebook, while encoding acoustic residuals in subsequent codebooks. This preserves ASR performance while achieving high-quality reconstruction. Additionally, we enhance HASRD's encoder efficiency, improving ASR performance without compromising reconstruction quality. Compared to SpeechTokenizer, HASRD achieves a 44% relative WER improvement, superior reconstruction quality, and 2x lower bitrate, demonstrating its effectiveness in disentangling acoustic and semantic information.
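The hierarchical codebook idea described above can be illustrated with a minimal residual vector quantization sketch: the first codebook quantizes the input frames (the slot HASRD assigns to semantic tokens), and each subsequent codebook quantizes the residual left by the previous levels. This is an illustrative toy with random, untrained codebooks and numpy-only dependencies, not the paper's actual implementation; the shapes and codebook sizes are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_vq(x, codebooks):
    """Quantize frames x (T, D) with a stack of codebooks.

    Level 0 quantizes x directly; every later level quantizes the
    residual between x and the reconstruction accumulated so far.
    Returns one token sequence per codebook plus the reconstruction.
    (Illustrative sketch only; HASRD trains these codebooks so that
    level 0 carries semantic content and later levels acoustic detail.)
    """
    residual = x.copy()
    tokens = []
    recon = np.zeros_like(x)
    for cb in codebooks:
        # nearest codeword per frame: (T, 1, D) vs (1, K, D) squared distances
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)          # (T,) discrete tokens at this level
        tokens.append(idx)
        recon += cb[idx]                    # add this level's contribution
        residual = x - recon                # next level sees what is left over
    return tokens, recon

# toy data: 10 frames of 8-dim features, 3 codebooks of 16 codewords each
x = rng.normal(size=(10, 8))
codebooks = [rng.normal(size=(16, 8)) for _ in range(3)]
tokens, recon = residual_vq(x, codebooks)
```

With trained codebooks, each added level shrinks the residual, which is why later codebooks can carry progressively finer acoustic detail while the first remains free to encode semantics.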