🤖 AI Summary
Existing evaluations of Speech Large Language Models (SpeechLLMs) lack systematic coverage of fine-grained speech perception and complex reasoning, necessitating a comprehensive benchmark for multidimensional spoken language understanding. Method: We introduce MMSU—the first integrated multitask benchmark for speech understanding and reasoning—comprising 5,000 audio–question–answer triplets across 47 tasks, tightly integrating phonological, prosodic, semantic, and paralinguistic dimensions grounded in linguistic theory. Contribution/Results: MMSU pioneers the systematic incorporation of linguistic theory into speech evaluation frameworks; establishes a joint paradigm for assessing fine-grained perception and complex reasoning; and features structured annotations with a hierarchical task design. Evaluation of 14 state-of-the-art SpeechLLMs reveals an average accuracy below 62%, exposing critical weaknesses in prosody modeling, affective reasoning, and speech-phenomenon identification—thereby providing a reproducible, decomposable evaluation standard for speech interaction systems.
📝 Abstract
Speech inherently contains rich acoustic information that extends far beyond its textual content. In real-world spoken language understanding, effective interpretation often requires integrating the semantic meaning (e.g., content), paralinguistic features (e.g., emotion, speed, pitch), and phonological characteristics (e.g., prosody, intonation, rhythm) embedded in speech. While recent multimodal Speech Large Language Models (SpeechLLMs) have demonstrated remarkable capabilities in processing audio information, their ability to perform fine-grained perception and complex reasoning over natural speech remains largely unexplored. To address this gap, we introduce MMSU, a comprehensive benchmark designed specifically for understanding and reasoning in spoken language. MMSU comprises 5,000 meticulously curated audio-question-answer triplets across 47 distinct tasks. To ground our benchmark in linguistic theory, we systematically incorporate a wide range of linguistic phenomena, including phonetics, prosody, rhetoric, syntax, semantics, and paralinguistics. Through a rigorous evaluation of 14 advanced SpeechLLMs, we identify substantial room for improvement in existing models, highlighting meaningful directions for future optimization. MMSU establishes a new standard for comprehensive assessment of spoken language understanding, providing valuable insights for developing more sophisticated human-AI speech interaction systems. The MMSU benchmark is available at https://huggingface.co/datasets/ddwang2000/MMSU. Evaluation code is available at https://github.com/dingdongwang/MMSU_Bench.
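The evaluation paradigm described above is multiple-choice accuracy over audio-question-answer triplets, broken down by task. Below is a minimal, self-contained sketch of how per-task and overall accuracy could be aggregated from model predictions; the record schema and task names are illustrative assumptions, not the benchmark's actual format (the real data loads from the Hugging Face dataset linked above).

```python
# Illustrative sketch: aggregating multiple-choice accuracy per task
# and overall, as in an MMSU-style evaluation. The (task, predicted,
# gold) schema and the task names below are assumptions for the demo.

from collections import defaultdict


def aggregate_accuracy(records):
    """Compute per-task and overall accuracy for (task, predicted, gold) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for task, predicted, gold in records:
        total[task] += 1
        correct[task] += int(predicted == gold)
    per_task = {t: correct[t] / total[t] for t in total}
    overall = sum(correct.values()) / sum(total.values())
    return per_task, overall


# Hypothetical model outputs on two illustrative tasks
records = [
    ("prosody", "B", "B"),
    ("prosody", "A", "C"),
    ("emotion", "D", "D"),
    ("emotion", "B", "B"),
]
per_task, overall = aggregate_accuracy(records)
print(per_task)   # {'prosody': 0.5, 'emotion': 1.0}
print(overall)    # 0.75
```

Reporting both the per-task breakdown and the overall mean is what lets a benchmark like this expose which capabilities (e.g., prosody versus emotion) drag a model's average down.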