🤖 AI Summary
Current speech assessment faces two key bottlenecks: reliance on handcrafted, task-specific systems targeting individual audio characteristics, and low correlation between automatic scoring and human preferences. This paper presents AudioJudge, a systematic study of large audio models (LAMs) as judges for unified speech evaluation. The method decomposes assessment into specialized judges for lexical content, speech quality, and paralinguistic features, and employs a prompting strategy that combines audio concatenation with in-context learning, improving performance on both audio characteristic detection and human preference simulation. On a system-level ranking benchmark, the multi-aspect ensemble achieves up to 0.91 Spearman correlation with human judgments and maintains strong performance under acoustic noise, though the underlying LAMs exhibit verbosity and positional biases that require careful mitigation.
📝 Abstract
Current speech evaluation suffers from two critical limitations: the need for, and difficulty of, designing specialized systems targeting individual audio characteristics, and poor correlation between automatic evaluation methods and human preferences. This work presents a systematic study of Large Audio Model (LAM) as a Judge, AudioJudge, investigating whether it can provide a unified evaluation framework that addresses both challenges. We systematically explore AudioJudge across audio characteristic detection tasks, including pronunciation, speaking rate, speaker identification, and speech quality, as well as system-level human preference simulation for automated benchmarking. We investigate different prompt engineering strategies, finding that audio concatenation combined with in-context learning significantly improves performance across both audio characteristic detection and human preference simulation tasks. We further introduce a multi-aspect ensemble AudioJudge to enable general-purpose multi-aspect audio evaluation. This method decomposes speech assessment into specialized judges for lexical content, speech quality, and paralinguistic features, achieving up to 0.91 Spearman correlation with human preferences on our system ranking benchmark. Robustness analysis reveals that while LAMs maintain strong performance under acoustic noise, they exhibit significant verbosity and positional biases that require careful mitigation.
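The multi-aspect ensemble described above can be sketched as follows. This is a minimal illustrative Python sketch, not the paper's implementation: the `AspectScore` type, the aspect names, and the unweighted-by-default averaging are all assumptions made for illustration; the actual per-aspect judges would be LAM calls whose prompts concatenate the audio clips under comparison with in-context examples.

```python
# Hypothetical sketch of a multi-aspect ensemble judge.
# Each specialized judge scores one aspect (lexical content, speech
# quality, paralinguistic features); the ensemble aggregates the aspect
# scores and prefers the system with the higher aggregate.
from dataclasses import dataclass

@dataclass
class AspectScore:
    aspect: str   # e.g. "lexical", "quality", "paralinguistic" (illustrative names)
    score: float  # judge score for one system's response, in [0, 1]

def ensemble_preference(scores_a, scores_b, weights=None):
    """Combine per-aspect judge scores into a pairwise A/B preference.

    `weights` optionally re-weights aspects; by default every aspect
    contributes equally (an assumption of this sketch).
    """
    weights = weights or {s.aspect: 1.0 for s in scores_a}

    def aggregate(scores):
        total = sum(weights[s.aspect] * s.score for s in scores)
        return total / sum(weights[s.aspect] for s in scores)

    agg_a, agg_b = aggregate(scores_a), aggregate(scores_b)
    if agg_a > agg_b:
        return "A"
    if agg_b > agg_a:
        return "B"
    return "tie"
```

In practice, such pairwise preferences over many prompts would be converted into per-system win rates and ranked, with the ranking compared against human preferences via Spearman correlation.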