OpenACE: An Open Benchmark for Evaluating Audio Coding Performance

📅 2024-09-12
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 0
Influential: 0
🤖 AI Summary
The audio/speech coding community has long lacked a unified, open-source, and reproducible evaluation benchmark; existing evaluations rely on proprietary or small-scale datasets, leading to unfair and non-reproducible comparisons between traditional DSP-based and machine learning (ML)-based codecs. Method: The paper introduces OpenACE, an open-source, full-band, content-diverse audio coding quality benchmark featuring standardized test vectors, support for emerging scenarios (e.g., emotional speech, Bluetooth LE Audio with LC3/LC3+), and integration of mainstream codecs (Opus, EVS, LC3) with objective quality metrics (PESQ, ViSQOL). Contribution/Results: OpenACE enables fair, standardized comparison between DSP-based and ML-based codecs; experiments reveal notable quality variation when coding emotional speech at 16 kbps. The benchmark is publicly released to support reproducible cross-algorithm and cross-distribution evaluation.
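The evaluation loop the summary describes (encode a test vector, decode, score the degraded signal against the reference with an objective metric) can be sketched with stand-ins. The uniform quantizer below is a toy substitute for a real codec such as Opus or LC3, and SNR is a toy substitute for PESQ/ViSQOL; only the harness structure mirrors what a benchmark like OpenACE automates, and all names here are illustrative, not from the paper's code:

```python
import math

def make_test_vector(n=16000, freq=440.0, sr=16000):
    """Synthetic 1 s sine test vector (stand-in for a benchmark test item)."""
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

def quantize(signal, bits):
    """Toy 'codec': uniform quantization at a given bit depth.
    A real benchmark run would invoke Opus/EVS/LC3 encoders instead."""
    levels = 2 ** (bits - 1)
    return [round(x * levels) / levels for x in signal]

def snr_db(ref, deg):
    """Signal-to-noise ratio in dB (PESQ or ViSQOL would be used in practice)."""
    sig = sum(x * x for x in ref)
    err = sum((x - y) ** 2 for x, y in zip(ref, deg))
    return 10 * math.log10(sig / err)

# Sweep the toy codec's "bitrate" knob and report quality per condition.
ref = make_test_vector()
for bits in (4, 8, 12):
    print(f"{bits:2d}-bit: {snr_db(ref, quantize(ref, bits)):.1f} dB")
```

As expected for uniform quantization, the score improves by roughly 6 dB per added bit; a real run would instead plot PESQ or ViSQOL against codec bitrate for each content type.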

📝 Abstract
Audio and speech coding lack a unified evaluation framework and open-source testing infrastructure. Many candidate systems have been evaluated on proprietary, non-reproducible, or small datasets, and machine-learning-based codecs are often tested on data drawn from distributions similar to their training data, which is an unfair comparison with digital-signal-processing-based codecs that usually generalize well to unseen data. This paper presents a full-band audio and speech coding quality benchmark with more varied content types, including traditional open test vectors. An example use case of audio coding quality assessment is presented with the open-source Opus codec, 3GPP's EVS, and ETSI's recent LC3 and LC3+ codecs used in Bluetooth LE Audio profiles. In addition, quality variations in emotional speech encoding at 16 kbps are shown. The proposed open-source benchmark contributes to the democratization of audio and speech coding and is available at https://github.com/JozefColdenhoff/OpenACE.
Problem

Research questions and friction points this paper is trying to address.

Lack of a unified evaluation framework for audio and speech coding
Proprietary or biased datasets limit fair comparisons
Need for an open benchmark covering diverse content types
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source benchmark for diverse audio coding evaluation
Includes both traditional and new variable-content test vectors
Compares multiple codecs, including Opus, EVS, and LC3/LC3+