Robustness assessment of large audio language models in multiple-choice evaluation

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large audio-language models (LALMs) exhibit severe robustness deficiencies in multiple-choice evaluation: mainstream benchmarks do not control confounding variables such as option ordering and the paraphrasing of questions and options, leading to inflated and unreliable accuracy estimates. Method: We propose a lightweight, systematic robustness evaluation protocol comprising three perturbation types (option shuffling, question rephrasing, and option paraphrasing), along with corresponding stability metrics. Contribution/Results: Evaluated on three major audio understanding benchmarks (MMAU, MMAR, MMSU), the protocol exposes significant performance degradation across four state-of-the-art LALMs (Audio Flamingo 2/3, Qwen2.5-Omni-7B-Instruct, and Kimi-Audio-7B-Instruct), with average accuracy drops of 12.6% under perturbations. The work empirically reveals the fragility of LALMs in multiple-choice reasoning, providing a methodological foundation and empirical evidence for trustworthy, reproducible audio-language evaluation standards.
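
To make the perturbation step concrete, below is a minimal sketch of the option-shuffling perturbation for a single MCQA item. The item representation, prompt format, and the use of cyclic rotations are assumptions for illustration, not the authors' code; the question- and option-paraphrasing perturbations require an external rewriting step and are omitted here.

```python
# Sketch: generate option-shuffled variants of one MCQA item (cyclic rotations),
# re-indexing the gold answer so each variant stays answerable.
from dataclasses import dataclass
from typing import List

LETTERS = "ABCD"

@dataclass
class MCQAItem:
    question: str
    options: List[str]   # option texts; gold answer given by answer_idx
    answer_idx: int

def shuffled_variants(item: MCQAItem) -> List[MCQAItem]:
    """Return every cyclic rotation of the options, with the gold index updated."""
    n = len(item.options)
    variants = []
    for shift in range(n):
        opts = [item.options[(i + shift) % n] for i in range(n)]
        gold = (item.answer_idx - shift) % n
        variants.append(MCQAItem(item.question, opts, gold))
    return variants

def to_prompt(item: MCQAItem) -> str:
    """Format one variant as a multiple-choice prompt."""
    lines = [item.question]
    lines += [f"{LETTERS[i]}. {opt}" for i, opt in enumerate(item.options)]
    lines.append("Answer with the letter of the correct option.")
    return "\n".join(lines)

if __name__ == "__main__":
    item = MCQAItem(
        question="Which instrument is most prominent in the recording?",
        options=["Piano", "Violin", "Drums", "Flute"],
        answer_idx=1,
    )
    for v in shuffled_variants(item):
        print(to_prompt(v), "| gold:", LETTERS[v.answer_idx])
        print("---")
```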

📝 Abstract
Recent advances in large audio language models (LALMs) have primarily been assessed using a multiple-choice question answering (MCQA) framework. However, subtle changes, such as shifting the order of choices, lead to substantially different results. Existing MCQA frameworks do not account for this variability and report a single accuracy number per benchmark or category. We dive into the MCQA evaluation framework and conduct a systematic study spanning three benchmarks (MMAU, MMAR, and MMSU) and four models: Audio Flamingo 2, Audio Flamingo 3, Qwen2.5-Omni-7B-Instruct, and Kimi-Audio-7B-Instruct. Our findings indicate that models are sensitive not only to the ordering of choices, but also to the paraphrasing of the question and the choices. Finally, we propose a simpler evaluation protocol and metric that account for subtle variations and provide a more detailed evaluation report of LALMs within the MCQA framework.
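
As an illustration of what a stability-aware report could look like, the sketch below aggregates a model's predictions over the perturbed variants of each item into variant-averaged accuracy, worst-case accuracy, and prediction consistency. These metric definitions are common choices assumed for illustration and may differ from the exact metric the paper proposes.

```python
# Sketch: stability-style metrics over perturbed variants.
# preds_per_item[i] holds the model's chosen option TEXT for each variant of item i
# (matching on text rather than letter, since option order changes across variants);
# golds[i] is the gold option text.
from typing import Dict, List

def variant_accuracy(preds: List[str], gold: str) -> float:
    """Accuracy of one item averaged over its perturbed variants."""
    return sum(p == gold for p in preds) / len(preds)

def report(preds_per_item: List[List[str]], golds: List[str]) -> Dict[str, float]:
    n = len(golds)
    mean_acc = sum(variant_accuracy(p, g) for p, g in zip(preds_per_item, golds)) / n
    # Worst-case accuracy: an item counts as correct only if every variant is correct.
    worst_acc = sum(all(x == g for x in p) for p, g in zip(preds_per_item, golds)) / n
    # Consistency: fraction of items where the model picks the same option across
    # all perturbations, regardless of whether that option is correct.
    consistency = sum(len(set(p)) == 1 for p in preds_per_item) / n
    return {"mean_acc": mean_acc, "worst_case_acc": worst_acc, "consistency": consistency}

if __name__ == "__main__":
    preds = [["Violin", "Violin", "Piano", "Violin"], ["Drums"] * 4]
    golds = ["Violin", "Drums"]
    print(report(preds, golds))  # {'mean_acc': 0.875, 'worst_case_acc': 0.5, 'consistency': 0.5}
```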
Problem

Research questions and friction points this paper is trying to address.

Evaluating robustness of audio language models to choice ordering changes
Assessing model sensitivity to question and choice paraphrasing variations
Proposing improved evaluation protocol for multiple-choice audio benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed robust evaluation protocol for LALMs
Introduced metric accounting for subtle variations
Systematically assessed choice ordering sensitivity