🤖 AI Summary
Existing speech severity assessment models suffer from poor generalizability, often overfitting to dataset-specific acoustic cues, and typically rely on reference speech or text transcriptions, which limits their applicability to spontaneous, real-world speech. To address these limitations, we explore SpeechLMScore, a reference-free, pathology-agnostic assessment method grounded in acoustic unit language modeling, which derives severity-discriminative scores directly from raw speech without any pathological training data. To enable comprehensive evaluation, we introduce the NKI-SpeechRT dataset, built on the NKI-CCRT dataset, and analyze model robustness via its subjective noise ratings. Experiments demonstrate that SpeechLMScore remains robust under noisy conditions and significantly outperforms conventional acoustic feature–based approaches, narrowing the gap to reference-based baselines. Moreover, it captures the strong correlation between speech naturalness and severity without requiring ground-truth transcriptions, reference utterances, or pathological speech data.
📝 Abstract
Speech severity evaluation is becoming increasingly important as the economic burden of speech disorders grows. Current speech severity models often struggle to generalize, learning dataset-specific acoustic cues rather than meaningful correlates of speech severity. Furthermore, many models require reference speech or a transcript, limiting their applicability in ecologically valid scenarios such as spontaneous speech evaluation. Previous research has indicated that automatic speech naturalness scores correlate strongly with severity scores, leading us to explore a reference-free method, SpeechLMScore, which does not rely on pathological speech data. Additionally, we present the NKI-SpeechRT dataset, based on the NKI-CCRT dataset, to provide a more comprehensive foundation for speech severity evaluation. This study evaluates whether SpeechLMScore outperforms traditional acoustic feature-based approaches and assesses the performance gap between reference-free and reference-based models. Moreover, we examine the impact of noise on these models by utilizing the subjective noise ratings in the NKI-SpeechRT dataset. The results demonstrate that SpeechLMScore is robust to noise and offers superior performance compared to traditional approaches.
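The core idea behind SpeechLMScore-style scoring can be sketched in a few lines: speech is discretized into acoustic units (in practice, clustered self-supervised features such as HuBERT units), and an utterance is scored by the average log-likelihood those units receive under a language model trained on typical speech; lower likelihood suggests less natural, more severely affected speech. The toy bigram unit LM and the unit sequences below are illustrative assumptions for exposition, not the actual model or data used in the paper.

```python
import math
from collections import defaultdict

def speechlm_score(units, logprob):
    """Average log-likelihood of a discrete acoustic-unit sequence.

    Higher (less negative) scores indicate unit transitions that the
    unit LM finds typical, i.e. more natural-sounding speech.
    """
    total = sum(logprob(prev, cur) for prev, cur in zip(units, units[1:]))
    return total / (len(units) - 1)

# Toy stand-in for a unit LM: a Laplace-smoothed bigram model fitted on
# one "typical speech" unit sequence. A real system would use k-means
# units from a self-supervised encoder and a neural unit LM (assumption).
train = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 1, 4, 1, 3, 1, 4]
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(train, train[1:]):
    counts[a][b] += 1

def bigram_logprob(prev, cur, vocab=10, alpha=1.0):
    # Laplace-smoothed bigram log-probability over a 10-unit vocabulary
    c = counts[prev]
    return math.log((c[cur] + alpha) / (sum(c.values()) + alpha * vocab))

typical = [3, 1, 4, 1, 5]    # transitions seen during training
atypical = [8, 8, 0, 7, 7]   # transitions the unit LM has never seen
s_typ = speechlm_score(typical, bigram_logprob)
s_atyp = speechlm_score(atypical, bigram_logprob)
assert s_typ > s_atyp  # familiar unit transitions score higher
```

Because the score is computed solely from the unit LM's likelihood, no reference utterance, transcript, or pathological training data is needed, which is what makes the approach reference-free and pathology-agnostic.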