🤖 AI Summary
Evaluating open-ended audio question answering (QA) for large audio-language models (LALMs) faces challenges including high inter-annotator disagreement, ambiguity about partial correctness, and the inadequacy of scalar scores for capturing uncertainty. Method: We propose the first automated evaluation framework that explicitly models judgment uncertainty. Specifically, (1) we introduce Beta-distributed modeling of answer correctness—jointly estimating expected correctness and its uncertainty—and (2) design a three-stage human-in-the-loop annotation paradigm integrating structured human feedback with iterative refinement. Results: Evaluated on 3,580 audio QA pairs, our framework achieves Krippendorff’s alpha = 0.82 and Spearman correlation = 0.91—matching or outperforming LLM-based judges—while substantially reducing computational overhead. This work establishes a new interpretable, robust, and low-resource paradigm for evaluating open-ended audio understanding.
📝 Abstract
Evaluating open-ended responses from large audio language models (LALMs) is challenging because human annotators often genuinely disagree on answer correctness due to multiple valid interpretations, partial correctness, and subjective judgment. Traditional metrics reporting only mean scores fail to capture this uncertainty. We present ORCA (Open-ended Response Correctness Assessment), a framework that models the variability in human judgments using Beta distributions to predict both expected correctness and uncertainty. Our three-stage annotation framework combines human judgment with structured feedback and iterative refinement to simultaneously curate training data and improve benchmark quality. We collected 11,721 annotations across 3,580 question-answer pairs from 15 LALMs on two audio QA benchmarks, achieving inter-annotator agreement of 0.82 (Krippendorff's alpha). ORCA achieves 0.91 Spearman correlation with mean human judgments, matching or outperforming LLM-judge baselines while providing uncertainty estimates and requiring significantly less compute. We release our models, code, and curated dataset.
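The core idea—representing the spread of human correctness judgments as a Beta distribution so that both an expected score and an uncertainty fall out of the same object—can be illustrated with a minimal sketch. Note the assumptions: the paper trains a model to predict the Beta parameters directly, whereas this sketch simply fits them to a handful of annotator scores by the method of moments, and the function names (`fit_beta_moments`, `summarize`) are hypothetical, not from the released code.

```python
import math

def fit_beta_moments(scores):
    """Fit Beta(a, b) to correctness scores in (0, 1) via the method of moments.

    Illustrative stand-in for ORCA's learned parameter prediction: the Beta's
    shape captures both where annotators center and how much they disagree.
    """
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n
    # Method-of-moments estimates (require var < mean * (1 - mean))
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

def summarize(a, b):
    """Expected correctness and its standard deviation under Beta(a, b)."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, math.sqrt(var)

# Three annotators partially agree: two call the answer mostly correct,
# one calls it wrong -- a single mean score would hide this disagreement.
a, b = fit_beta_moments([0.9, 0.8, 0.2])
mu, sd = summarize(a, b)
```

Here `mu` recovers the mean human judgment while `sd` quantifies the residual disagreement, which is exactly what a scalar metric discards.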