MMAU-Pro: A Challenging and Comprehensive Benchmark for Holistic Evaluation of Audio General Intelligence

📅 2025-08-19
🤖 AI Summary
A comprehensive, rigorous benchmark for evaluating audio intelligence—spanning speech, non-speech sounds, and music understanding—is currently lacking, hindering systematic assessment of AI’s holistic auditory capabilities. Method: We introduce MMAU-Pro, the first end-to-end audio intelligence benchmark supporting multi-hop reasoning, real-world audio inputs, and cross-modal integrated evaluation. It encompasses 49 fine-grained competencies, incorporating novel dimensions such as long-duration audio understanding, spatial auditory modeling, and multi-audio collaborative analysis. Its high-difficulty test set is constructed from in-the-wild audio recordings and expert-annotated multiple-choice and open-ended question-answer pairs. Results: State-of-the-art models—including Gemini 2.5 Flash and Audio Flamingo 3—achieve only 59.2% and 51.7% average accuracy, respectively; performance on several task categories approaches chance level, revealing a critical bottleneck in human-like auditory comprehension.
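The headline numbers above (59.2% and 51.7% average accuracy, with some categories near chance) come from standard multiple-choice scoring. A minimal sketch of that arithmetic, assuming a simple list-of-dicts item format rather than the benchmark's actual data layout:

```python
# Illustrative sketch (not the official MMAU-Pro scorer): average accuracy
# over multiple-choice items, compared against the uniform-guessing baseline.
# The item fields (predicted / correct / num_choices) are assumed names.

def average_accuracy(items):
    """Fraction of items where the predicted choice matches the answer key."""
    hits = sum(1 for it in items if it["predicted"] == it["correct"])
    return hits / len(items)

def chance_baseline(items):
    """Expected accuracy of uniform random guessing over each item's choices."""
    return sum(1 / it["num_choices"] for it in items) / len(items)

items = [
    {"predicted": "B", "correct": "B", "num_choices": 4},
    {"predicted": "A", "correct": "C", "num_choices": 4},
    {"predicted": "D", "correct": "D", "num_choices": 4},
    {"predicted": "A", "correct": "B", "num_choices": 4},
]

print(f"accuracy={average_accuracy(items):.2f}")  # 0.50
print(f"chance={chance_baseline(items):.2f}")     # 0.25
```

A model whose per-category accuracy lands near the chance baseline is, by this measure, not extracting usable information from the audio for that category.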

📝 Abstract
Audio comprehension, including speech, non-speech sounds, and music, is essential for achieving human-level intelligence. Consequently, AI agents must demonstrate holistic audio understanding to qualify as generally intelligent. However, evaluating auditory intelligence comprehensively remains challenging. To address this gap, we introduce MMAU-Pro, the most comprehensive and rigorously curated benchmark for assessing audio intelligence in AI systems. MMAU-Pro contains 5,305 instances, where each instance pairs one or more audios with human expert-generated question-answer pairs, spanning speech, sound, music, and their combinations. Unlike existing benchmarks, MMAU-Pro evaluates auditory intelligence across 49 unique skills and multiple complex dimensions, including long-form audio comprehension, spatial audio reasoning, and multi-audio understanding, among others. All questions are meticulously designed to require deliberate multi-hop reasoning and include both multiple-choice and open-ended response formats. Importantly, audio data is sourced directly "from the wild" rather than from existing datasets with known distributions. We evaluate 22 leading open-source and proprietary multimodal AI models, revealing significant limitations: even state-of-the-art models such as Gemini 2.5 Flash and Audio Flamingo 3 achieve only 59.2% and 51.7% accuracy, respectively, approaching random performance in multiple categories. Our extensive analysis highlights specific shortcomings and provides novel insights, offering actionable perspectives for the community to enhance future AI systems' progression toward audio general intelligence. The benchmark and code are available at https://sonalkum.github.io/mmau-pro.
Problem

Research questions and friction points this paper is trying to address.

Evaluating holistic audio intelligence in AI systems comprehensively
Assessing multi-hop reasoning across speech, sound, and music
Benchmarking model performance on real-world audio understanding tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MMAU-Pro benchmark with 5,305 expert-curated audio instances
Evaluates 49 unique audio skills across multiple complex dimensions
Uses wild-sourced audio data requiring multi-hop reasoning
Sonal Kumar
University of Maryland, College Park, USA
Šimon Sedláček
Brno University of Technology, Czech Republic
Vaibhavi Lokegaonkar
University of Maryland, College Park, USA
Fernando López
INO
Infrared Imaging, NDE, Signal Processing, Terahertz Imaging, Thermal Sciences
Wenyi Yu
Tsinghua University
Nishit Anand
MS CS at University of Maryland, College Park
Machine Learning, Computer Vision, Natural Language Processing, Speech Recognition
Hyeonggon Ryu
KAIST
Computer Vision, Audio-Visual Learning
Lichang Chen
University of Maryland
AI Alignment, Omni-Modality, Reasoning
Maxim Plička
Brno University of Technology, Czech Republic
Miroslav Hlaváček
Phonexia
William Fineas Ellingwood
Middlebury College, USA
Sathvik Udupa
Brno University of Technology, Czech Republic
Siyuan Hou
Tsinghua University
Allison Ferner
Tufts University
Sara Barahona
Universidad Autónoma de Madrid
Cecilia Bolaños
Universidad de Buenos Aires
Satish Rahi
Indian Institute of Technology, Bombay
Laura Herrera-Alarcón
Universidad Autónoma de Madrid
Satvik Dixit
Carnegie Mellon University
Speech and Audio, Large Language Models
Siddhi Patil
University of Maryland, College Park, USA
Soham Deshmukh
Microsoft, Carnegie Mellon University
Audio machine learning, Audio processing, Speech processing
Lasha Koroshinadze
University of Maryland, College Park, USA
Yao Liu
Universiti Sains Malaysia
Leibny Paola Garcia Perera
Johns Hopkins University, USA
Eleni Zanou
Athens University of Economics and Business