AHELM: A Holistic Evaluation of Audio-Language Models

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio-language model (ALM) evaluation benchmarks are fragmented: each covers only one or two capabilities, omits aspects such as fairness and safety, and uses its own prompting strategies and decoding parameters, which makes fair cross-model comparison difficult. To address this, the paper introduces AHELM, a holistic benchmark spanning ten aspects important to ALM development and use, including audio perception, reasoning, emotion detection, fairness, and safety. Two new synthetic audio-text datasets are contributed: PARADE, which probes stereotypical bias, and CoRe-Bench, which measures reasoning over conversational audio through inferential multi-turn question answering. AHELM aggregates diverse public datasets under standardized prompts, inference parameters, and evaluation metrics, and adds simple, reproducible ASR-plus-LM baseline systems. Evaluating 14 open-weight and closed-API ALMs and 3 baselines shows that Gemini 2.5 Pro ranks first on 5 of the 10 aspects yet exhibits statistically significant group unfairness on ASR tasks; notably, one speech-to-text-only baseline ranks 5th overall, suggesting that simple pipelines remain competitive with current end-to-end ALMs.
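
The ASR-plus-LM baselines are described here only at a high level, so the following is a minimal sketch of what such a two-stage pipeline could look like, assuming an open-source recognizer (openai-whisper) and a Hugging Face text-generation model. The specific model choices, prompt wording, and file names are illustrative assumptions, not the paper's actual baseline configuration.

```python
# Minimal sketch of an ASR + LM pipeline baseline (illustrative; the paper's
# actual recognizer, language model, and prompt are not specified here).
import whisper                      # pip install openai-whisper
from transformers import pipeline   # pip install transformers

# 1) Transcribe the audio with an off-the-shelf speech recognizer.
asr_model = whisper.load_model("base")

# 2) Answer the text question over the transcript with a language model.
lm = pipeline("text-generation", model="gpt2")

def answer_from_audio(audio_path: str, question: str) -> str:
    """Run the two-stage baseline: speech-to-text, then text-only QA."""
    transcript = asr_model.transcribe(audio_path)["text"]
    prompt = (
        "Transcript of the audio:\n"
        f"{transcript}\n\n"
        f"Question: {question}\nAnswer:"
    )
    completion = lm(prompt, max_new_tokens=64)[0]["generated_text"]
    # The pipeline returns the prompt plus the continuation; keep only the new text.
    return completion[len(prompt):].strip()

# Example (hypothetical file name):
# print(answer_from_audio("clip_0001.wav", "What language is being spoken?"))
```

Because the pipeline only sees the transcript, it discards prosody, speaker identity, and non-speech sounds, which is why its competitive ranking on AHELM is a notable finding.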

📝 Abstract
Evaluations of audio-language models (ALMs) -- multimodal models that take interleaved audio and text as input and output text -- are hindered by the lack of standardized benchmarks; most benchmarks measure only one or two capabilities and omit evaluative aspects such as fairness or safety. Furthermore, comparison across models is difficult as separate evaluations test a limited number of models and use different prompting methods and inference parameters. To address these shortfalls, we introduce AHELM, a benchmark that aggregates various datasets -- including 2 new synthetic audio-text datasets called PARADE, which evaluates the ALMs on avoiding stereotypes, and CoRe-Bench, which measures reasoning over conversational audio through inferential multi-turn question answering -- to holistically measure the performance of ALMs across 10 aspects we have identified as important to the development and usage of ALMs: audio perception, knowledge, reasoning, emotion detection, bias, fairness, multilinguality, robustness, toxicity, and safety. We also standardize the prompts, inference parameters, and evaluation metrics to ensure equitable comparisons across models. We test 14 open-weight and closed-API ALMs from 3 developers and 3 additional simple baseline systems each consisting of an automatic speech recognizer and a language model. Our results show that while Gemini 2.5 Pro ranks top in 5 out of 10 aspects, it exhibits group unfairness ($p=0.01$) on ASR tasks whereas most of the other models do not. We also find that the baseline systems perform reasonably well on AHELM, with one ranking 5th overall despite having only speech-to-text capabilities. For transparency, all raw prompts, model generations, and outputs are available on our website at https://crfm.stanford.edu/helm/audio/v1.0.0. AHELM is intended to be a living benchmark and new datasets and models will be added over time.
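
The abstract reports group unfairness at $p=0.01$ on ASR tasks but does not spell out the statistical procedure here. The snippet below is one plausible way to test whether word error rate differs between two speaker groups, using jiwer for WER and a two-sided Mann-Whitney U test from SciPy; the test choice, group labels, and data layout are assumptions for illustration rather than AHELM's actual method.

```python
# Sketch of a group-fairness check on ASR quality (assumed procedure, not
# necessarily the one used by AHELM): compare per-utterance word error rates
# between two speaker groups and test whether the gap is significant.
from jiwer import wer                      # pip install jiwer
from scipy.stats import mannwhitneyu       # pip install scipy

def per_utterance_wer(references, hypotheses):
    """Word error rate for each (reference, hypothesis) pair."""
    return [wer(ref, hyp) for ref, hyp in zip(references, hypotheses)]

def asr_group_fairness(refs_a, hyps_a, refs_b, hyps_b, alpha=0.01):
    """p-value for the null hypothesis that both groups are transcribed equally well."""
    wer_a = per_utterance_wer(refs_a, hyps_a)
    wer_b = per_utterance_wer(refs_b, hyps_b)
    _, p_value = mannwhitneyu(wer_a, wer_b, alternative="two-sided")
    return p_value, p_value < alpha  # True => statistically significant gap

# Hypothetical usage with per-group references and model transcripts:
# p, unfair = asr_group_fairness(refs_group_a, hyps_group_a, refs_group_b, hyps_group_b)
```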
Problem

Research questions and friction points this paper is trying to address.

Lack of standardized benchmarks for audio-language models
Inconsistent evaluation methods across different ALM studies
No holistic assessment covering fairness and safety alongside core capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the AHELM benchmark for holistic evaluation of ALMs across 10 aspects
Standardizes prompts, inference parameters, and metrics for equitable comparisons (a configuration sketch follows this list)
Contributes two synthetic datasets: PARADE (stereotype bias) and CoRe-Bench (multi-turn conversational reasoning)
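
As a concrete illustration of the standardization point above, here is a minimal sketch of how a benchmark can pin down a single prompt template and one set of decoding parameters that every evaluated model receives. The field names, template text, and values are hypothetical, not AHELM's actual configuration.

```python
# Sketch of standardized prompting and decoding settings applied uniformly to
# every evaluated model (field names and values are illustrative).
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class DecodingConfig:
    temperature: float = 0.0      # deterministic decoding for comparability
    max_output_tokens: int = 256
    num_samples: int = 1

PROMPT_TEMPLATE = (
    "Listen to the audio and answer the question.\n"
    "Question: {question}\n"
    "Options:\n{options}\n"
    "Answer with a single letter.\nAnswer:"
)

def build_request(question: str, options: list[str], config: DecodingConfig) -> dict:
    """Assemble one request so every model sees identical text and settings."""
    formatted = PROMPT_TEMPLATE.format(
        question=question,
        options="\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options)),
    )
    return {"prompt": formatted, **asdict(config)}

# Example:
# build_request("What emotion does the speaker convey?",
#               ["anger", "joy", "sadness", "neutral"],
#               DecodingConfig())
```

Fixing the template and decoding parameters once, rather than per model, is what makes the cross-model rankings comparable.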