🤖 AI Summary
To address the challenges of subjective quality assessment and the inefficiency of purely data-driven models in speech/audio coding, this paper proposes a tightly integrated hybrid neural coding framework that synergistically combines model-driven and data-driven paradigms. Methodologically, it introduces a novel multi-level hybrid architecture that deeply couples psychoacoustic-weighted loss, customized time-frequency domain prediction (TF-Codec/MDCTNet), an LPCNet-based backbone, and a neural post-processing module, trained end-to-end via an autoencoder paradigm. The core contribution lies in systematically bridging the performance gap between classical signal modeling and end-to-end deep learning. Experimental results demonstrate that, at ultra-low bitrates of 1.6–3.2 kbps, the proposed method achieves a P.808 MOS gain of ≥0.5 over baselines, yielding subjective audio quality approaching that of wideband codecs, while increasing computational overhead by less than 15%.
📝 Abstract
This article explores the integration of model-based and data-driven approaches within the realm of neural speech and audio coding systems. It highlights the challenges posed by the subjective evaluation processes of speech and audio codecs and discusses the limitations of purely data-driven approaches, which often require inefficiently large architectures to match the performance of model-based methods. The study presents hybrid systems as a viable solution, offering significant improvements to the performance of conventional codecs through meticulously chosen design enhancements. Specifically, it introduces a neural network-based signal enhancer that is designed to postprocess existing codecs’ output, along with the autoencoder-based end-to-end models and LPCNet–hybrid systems that combine linear predictive coding (LPC) with neural networks. Furthermore, the article delves into predictive models that operate within custom feature spaces (TF-Codec) or predefined transform domains (MDCTNet) and examines the use of psychoacoustically calibrated loss functions to train end-to-end neural audio codecs. Through these investigations, the article demonstrates the potential of hybrid systems to advance the field of speech and audio coding by bridging the gap between traditional model-based approaches and modern data-driven techniques.