🤖 AI Summary
This work addresses the challenge that existing neural audio codecs struggle to balance speech intelligibility with mel-spectrogram reconstruction fidelity, while semantic distillation approaches fail to guarantee content preservation. To overcome this, we propose a self-supervised representation reconstruction (SSRR) loss, introduced into codec training for the first time, which directly reconstructs self-supervised representations from decoded speech. This significantly enhances intelligibility and accelerates convergence. Integrated with a zero-lookahead streaming Transformer architecture, our method enables low-latency real-time deployment. The resulting codec, JHCodec, achieves state-of-the-art speech intelligibility and overall quality under single-GPU training, and we publicly release the complete implementation and training pipeline.
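The SSRR loss described above can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the paper's implementation: a real SSRR loss would compare representations from a frozen pretrained self-supervised model (e.g., HuBERT or WavLM) on reference vs. decoded speech; here, framed spectral magnitudes play the role of those representations so the example stays self-contained.

```python
import numpy as np

def ssl_features(wave, frame=320):
    # Placeholder for a frozen self-supervised model (e.g., HuBERT/WavLM).
    # We frame the waveform and take rFFT magnitudes as stand-in "representations".
    n = len(wave) // frame
    frames = wave[: n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames, axis=1))

def ssrr_loss(reference, decoded):
    # L1 distance between the (stand-in) self-supervised representations
    # of the reference speech and the codec's decoded output.
    ref = ssl_features(reference)
    dec = ssl_features(decoded)
    return float(np.mean(np.abs(ref - dec)))

rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)                       # 1 s of "audio" at 16 kHz
loss_same = ssrr_loss(ref, ref)                        # identical signals: zero loss
loss_diff = ssrr_loss(ref, ref + 0.1 * rng.standard_normal(16000))
print(loss_same, loss_diff)
```

In training, this term would be added to the usual reconstruction and adversarial losses, penalizing decoded speech whose self-supervised representations drift from those of the input.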
📝 Abstract
Neural audio codecs optimized for mel-spectrogram reconstruction often fail to preserve intelligibility. While semantic encoder distillation improves encoded representations, it does not guarantee content preservation in the reconstructed speech. In this work, we demonstrate that a self-supervised representation reconstruction (SSRR) loss fundamentally improves codec training and performance. First, SSRR significantly accelerates convergence, enabling competitive results using only a single GPU. Second, it enhances intelligibility by reconstructing distilled self-supervised representations from codec outputs. Third, SSRR enables high intelligibility without additional lookahead in streaming Transformer-based codecs, allowing a zero-lookahead architecture for real-time deployment. As a result, our JHCodec achieves state-of-the-art performance while maintaining minimal latency and reduced training cost. We open-source the full implementation, training pipeline, and demo on GitHub: https://github.com/jhcodec843/jhcodec.
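The "zero lookahead" property mentioned above means each frame of a streaming Transformer layer may attend only to the current and past frames, never to future ones. A minimal sketch of such a causal attention mask (names and frame count are illustrative, not from the paper):

```python
import numpy as np

def zero_lookahead_mask(n_frames):
    # True = attention allowed. Row i may attend to columns j <= i only,
    # so no future frames are needed and per-frame latency stays minimal.
    return np.tril(np.ones((n_frames, n_frames), dtype=bool))

mask = zero_lookahead_mask(4)
print(mask.astype(int))
```

A codec that instead used a few frames of lookahead would shift the mask's diagonal upward, trading extra latency for future context; SSRR is what lets the architecture keep intelligibility without that trade.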