🤖 AI Summary
Existing speech foundation models lack systematic evaluation of their sensitivity to voice-quality variation, such as breathiness and creakiness, which constitutes a critical paralinguistic cue shaping emotional and social inference. Method: This study presents the first systematic investigation of model sensitivity to voice quality. It introduces a novel parallel speech dataset with synthetically modified phonation types to overcome the limitations of conventional multiple-choice benchmarks, and it employs open-ended generation, fine-grained emotion recognition, and contrastive analysis to assess response consistency across vocal conditions. Contribution/Results: We demonstrate significant behavioral inconsistency in mainstream speech foundation models under varying voice qualities, revealing substantial performance degradation and divergent semantic and affective interpretations. These findings empirically establish voice quality as a critical, previously overlooked dimension for evaluating speech foundation models, highlighting both its methodological necessity and its practical significance for robust, socially aware speech understanding.
📝 Abstract
Recent advances in speech foundation models (SFMs) have enabled the direct processing of spoken language from raw audio, bypassing intermediate textual representations. This capability exposes SFMs to, and potentially allows them to respond to, the rich paralinguistic variation embedded in the input speech signal. One under-explored dimension of paralinguistic variation is voice quality, encompassing phonation types such as creaky and breathy voice. These phonation types are known to influence how listeners infer affective state, stance and social meaning in speech. Existing benchmarks for speech understanding largely rely on multiple-choice question answering (MCQA) formats, which are prone to failure and therefore unreliable for capturing the nuanced ways paralinguistic features influence model behaviour. In this paper, we probe SFMs through open-ended generation tasks and speech emotion recognition, evaluating whether model behaviours are consistent across different phonation inputs. We introduce a new parallel dataset featuring synthesized modifications to voice quality, designed to evaluate SFM responses to creaky and breathy voice. Our work provides the first examination of SFM sensitivity to these non-lexical aspects of speech perception.