Reconstruct! Don't Encode: Self-Supervised Representation Reconstruction Loss for High-Intelligibility and Low-Latency Streaming Neural Audio Codec

πŸ“… 2026-03-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge that existing neural audio codecs struggle to balance speech intelligibility with Mel-spectrogram reconstruction fidelity, while semantic distillation approaches fail to guarantee content preservation. To overcome this, the authors propose a self-supervised representation reconstruction (SSRR) loss, introduced into codec training for the first time, which directly reconstructs self-supervised representations from decoded speech. This significantly enhances intelligibility and accelerates convergence. Combined with a zero-lookahead streaming Transformer architecture, the method enables low-latency real-time deployment. The resulting codec, JHCodec, achieves state-of-the-art performance in both speech intelligibility and overall quality under single-GPU training conditions, and the authors publicly release the complete implementation and training pipeline.

πŸ“ Abstract
Neural audio codecs optimized for mel-spectrogram reconstruction often fail to preserve intelligibility. While semantic encoder distillation improves encoded representations, it does not guarantee content preservation in reconstructed speech. In this work, we demonstrate that a self-supervised representation reconstruction (SSRR) loss fundamentally improves codec training and performance. First, SSRR significantly accelerates convergence, enabling competitive results using only a single GPU. Second, it enhances intelligibility by reconstructing distilled self-supervised representations from codec outputs. Third, SSRR enables high intelligibility without additional lookahead in streaming Transformer-based codecs, allowing a zero-lookahead architecture for real-time deployment. As a result, our JHCodec achieves state-of-the-art performance while maintaining minimal latency and reduced training cost. We open-source the full implementation, training pipeline, and demo on GitHub: https://github.com/jhcodec843/jhcodec.
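The core idea of the SSRR loss described above — penalizing the distance between self-supervised features of the decoded speech and those of the reference — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `ssl_features` here is a stand-in (a fixed random projection of framed audio) for a frozen self-supervised model such as HuBERT or WavLM, and the frame size, feature dimension, and MSE distance are all assumptions.

```python
import numpy as np

def ssl_features(wave, frame=320, dim=8, seed=0):
    # Stand-in for a frozen self-supervised speech model (e.g. HuBERT/WavLM).
    # Real SSRR would use the actual model's hidden representations; here we
    # use a fixed random projection of non-overlapping frames, purely for
    # illustration of the loss structure.
    n = len(wave) // frame
    frames = wave[: n * frame].reshape(n, frame)
    proj = np.random.default_rng(seed).standard_normal((frame, dim))
    return frames @ proj  # (n_frames, dim) feature matrix

def ssrr_loss(reference, decoded):
    # Representation reconstruction loss: the decoded speech is pushed to
    # yield the same self-supervised features as the reference input,
    # rather than only matching it in the mel-spectrogram domain.
    f_ref = ssl_features(reference)
    f_dec = ssl_features(decoded)
    return float(np.mean((f_ref - f_dec) ** 2))
```

In training, this term would be added to the usual reconstruction and adversarial losses, with the feature extractor kept frozen so gradients flow only into the codec.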
Problem

Research questions and friction points this paper is trying to address.

neural audio codec
speech intelligibility
low-latency streaming
representation reconstruction
self-supervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Supervised Representation Reconstruction
Neural Audio Codec
Zero-Lookahead Streaming
Intelligibility Enhancement
Low-Latency Speech Coding
πŸ”Ž Similar Papers
No similar papers found.