Neural Speech and Audio Coding: Modern AI technology meets traditional codecs [Special Issue On Model-Based and Data-Driven Audio Signal Processing]

📅 2024-08-13
🏛️ IEEE Signal Processing Magazine
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the difficulty of subjective quality assessment and the inefficiency of purely data-driven models in speech and audio coding, this paper proposes a tightly integrated hybrid neural coding framework that combines model-driven and data-driven paradigms. Methodologically, it introduces a multi-level hybrid architecture coupling a psychoacoustically weighted loss, customized time-frequency-domain prediction (TF-Codec/MDCTNet), an LPCNet-based backbone, and a neural post-processing module, trained end-to-end under an autoencoder paradigm. The core contribution is systematically bridging the performance gap between classical signal modeling and end-to-end deep learning. Experimentally, at ultra-low bitrates of 1.6–3.2 kbps, the proposed method achieves a P.808 MOS gain of at least 0.5 over baselines, yielding subjective quality approaching that of wideband codecs while increasing computational overhead by less than 15%.

📝 Abstract
This article explores the integration of model-based and data-driven approaches within the realm of neural speech and audio coding systems. It highlights the challenges posed by the subjective evaluation processes of speech and audio codecs and discusses the limitations of purely data-driven approaches, which often require inefficiently large architectures to match the performance of model-based methods. The study presents hybrid systems as a viable solution, offering significant improvements to the performance of conventional codecs through carefully chosen design enhancements. Specifically, it introduces a neural network-based signal enhancer designed to postprocess existing codecs' output, along with autoencoder-based end-to-end models and hybrid systems such as LPCNet that combine linear predictive coding (LPC) with neural networks. Furthermore, the article delves into predictive models that operate within custom feature spaces (TF-Codec) or predefined transform domains (MDCTNet) and examines the use of psychoacoustically calibrated loss functions to train end-to-end neural audio codecs. Through these investigations, the article demonstrates the potential of hybrid systems to advance the field of speech and audio coding by bridging the gap between traditional model-based approaches and modern data-driven techniques.
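The psychoacoustically calibrated losses mentioned above can be illustrated with a minimal sketch: weight the spectral error so that perceptually salient frequencies dominate training. The A-weighting curve and function names below are illustrative stand-ins, not the paper's actual calibration.

```python
import numpy as np

def a_weighting_db(freqs_hz):
    """Approximate A-weighting curve in dB -- a simple stand-in for a
    psychoacoustic weighting (hypothetical choice for illustration)."""
    f2 = np.asarray(freqs_hz, dtype=float) ** 2
    num = (12194.0 ** 2) * f2 ** 2
    den = ((f2 + 20.6 ** 2)
           * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    ra = np.where(den > 0, num / den, 0.0)
    return 20.0 * np.log10(np.maximum(ra, 1e-12)) + 2.0

def weighted_spectral_loss(reference, decoded, sr=16000, n_fft=512):
    """Mean squared error between magnitude spectra, scaled so that
    errors at perceptually important frequencies cost more."""
    ref_mag = np.abs(np.fft.rfft(reference, n=n_fft))
    dec_mag = np.abs(np.fft.rfft(decoded, n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    w = 10.0 ** (a_weighting_db(freqs) / 20.0)  # dB -> linear weights
    return float(np.mean((w * (ref_mag - dec_mag)) ** 2))
```

In an end-to-end codec this scalar would be computed per frame inside the training loop (with a differentiable framework) rather than with NumPy; the weighting idea is the same.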
Problem

Research questions and friction points this paper is trying to address.

Neural Speech and Audio Coding
Quality Evaluation
Efficiency Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid systems
Neural networks
Psychoacoustic loss function
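The LPC-plus-neural-network split behind LPCNet-style hybrids rests on classical linear prediction. The sketch below shows only that classical half in plain NumPy (analysis via Levinson-Durbin, residual extraction, synthesis); in an LPCNet-style codec a neural network would model the residual, which is omitted here.

```python
import numpy as np

def lpc_coefficients(frame, order=8):
    """Estimate LPC coefficients a = [1, a1, ..., ap] with the
    autocorrelation method and Levinson-Durbin recursion."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                      # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)                # remaining prediction error
    return a, err

def lpc_residual(frame, a):
    """FIR-filter the frame with A(z) to obtain the prediction residual."""
    return np.convolve(frame, a)[:len(frame)]

def lpc_synthesize(residual, a):
    """Run the all-pole synthesis filter 1/A(z): exact inverse of
    lpc_residual when both start from zero initial state."""
    order = len(a) - 1
    out = np.zeros(len(residual))
    for n in range(len(residual)):
        s = residual[n]
        for j in range(1, order + 1):
            if n - j >= 0:
                s -= a[j] * out[n - j]
        out[n] = s
    return out
```

Because the short-term spectral envelope is carried cheaply by the LPC filter, the residual has far less structure than the waveform, which is what makes it an attractive target for a compact neural model.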
Minje Kim
University of Illinois at Urbana-Champaign
Jan Skoglund
Google, LLC
Speech processing · Audio processing · Signal processing