MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark

📅 2025-06-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Fine-grained speech perception and complex reasoning in Speech Large Language Models (SpeechLLMs) have not been systematically evaluated, motivating a comprehensive benchmark for multidimensional spoken language understanding. Method: We introduce MMSU, the first integrated multitask benchmark for speech understanding and reasoning, comprising 5,000 audio-question-answer triplets across 47 tasks that tightly integrate phonological, prosodic, semantic, and paralinguistic dimensions grounded in linguistic theory. Contribution/Results: MMSU systematically incorporates linguistic theory into a speech evaluation framework, establishes a joint paradigm for assessing fine-grained perception and complex reasoning, and provides structured annotations with a hierarchical task design. Cross-model evaluation of 14 state-of-the-art SpeechLLMs reveals an average accuracy below 62%, exposing critical weaknesses in prosody modeling, affective reasoning, and the identification of speech phenomena, and thereby offering a reproducible, decomposable evaluation standard for speech interaction systems.
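
To make the reported metric concrete, here is a minimal sketch of how a macro-averaged accuracy over tasks could be computed from model predictions. The record layout and field names (`task`, `answer`, `prediction`) are illustrative assumptions, not MMSU's actual schema or official scoring code.

```python
from collections import defaultdict

def score_predictions(records):
    """Compute per-task and macro-averaged multiple-choice accuracy.

    Each record is assumed to carry a task label, the gold answer
    choice, and the model's predicted choice. These field names are
    placeholders for illustration only.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["task"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["task"]] += 1
    per_task = {task: correct[task] / total[task] for task in total}
    macro = sum(per_task.values()) / len(per_task)
    return per_task, macro

# Toy example with two tasks; MMSU spans 47.
records = [
    {"task": "prosody", "answer": "B", "prediction": "B"},
    {"task": "prosody", "answer": "A", "prediction": "C"},
    {"task": "emotion", "answer": "D", "prediction": "D"},
]
per_task, macro = score_predictions(records)
print(per_task, f"macro avg: {macro:.2%}")
```

Macro-averaging weights all 47 tasks equally regardless of their size, which matches the benchmark's goal of exposing weaknesses on individual dimensions rather than rewarding performance on the most populous tasks.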

📝 Abstract
Speech inherently contains rich acoustic information that extends far beyond its textual content. In real-world spoken language understanding, effective interpretation often requires integrating the semantic meaning (e.g., content), paralinguistic features (e.g., emotion, speaking rate, pitch), and phonological characteristics (e.g., prosody, intonation, rhythm) embedded in speech. While recent multimodal Speech Large Language Models (SpeechLLMs) have demonstrated remarkable capabilities in processing audio information, their ability to perform fine-grained perception and complex reasoning over natural speech remains largely unexplored. To address this gap, we introduce MMSU, a comprehensive benchmark designed specifically for understanding and reasoning in spoken language. MMSU comprises 5,000 meticulously curated audio-question-answer triplets across 47 distinct tasks. To ground the benchmark in linguistic theory, we systematically cover a wide range of linguistic phenomena, including phonetics, prosody, rhetoric, syntactics, semantics, and paralinguistics. Through a rigorous evaluation of 14 advanced SpeechLLMs, we identify substantial room for improvement in existing models, highlighting meaningful directions for future optimization. MMSU establishes a new standard for the comprehensive assessment of spoken language understanding, providing valuable insights for developing more sophisticated human-AI speech interaction systems. The MMSU benchmark is available at https://huggingface.co/datasets/ddwang2000/MMSU, and the evaluation code is available at https://github.com/dingdongwang/MMSU_Bench.
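
For readers who want to inspect the data, here is a minimal loading sketch using the Hugging Face `datasets` library. The repository ID comes from the dataset link above; the split and column names are not documented in this card, so the code only prints whatever schema is actually published.

```python
# pip install datasets  (decoding the audio column may additionally
# require soundfile/librosa, depending on how the audio is stored)
from datasets import load_dataset

# Load the MMSU benchmark from the Hugging Face Hub.
ds = load_dataset("ddwang2000/MMSU")

# Inspect the available splits and features before assuming anything
# about the schema.
print(ds)

# Peek at one example from the first split and report its field types.
first_split = list(ds.keys())[0]
example = next(iter(ds[first_split]))
print({key: type(value).__name__ for key, value in example.items()})
```

For full evaluation, the official scripts at the GitHub repository linked above should be preferred over ad hoc loading code.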
Problem

Research questions and friction points this paper is trying to address.

Assessing SpeechLLMs' fine-grained perception in natural speech
Integrating semantic, paralinguistic, and phonological speech features
Benchmarking spoken language understanding and reasoning capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates semantic, paralinguistic, and phonological features
Introduces MMSU benchmark for spoken language understanding
Evaluates 14 advanced SpeechLLMs to identify concrete directions for improvement