CLAIR-A: Leveraging Large Language Models to Judge Audio Captions

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
🤖 AI Summary
Existing Automated Audio Captioning (AAC) evaluation methods predominantly focus on isolated dimensions and fail to holistically capture auditory scene understanding, sound-object inference, temporal coherence, and environmental context, resulting in low correlation with human judgments. To address this, the paper proposes CLAIR-A, a zero-shot, end-to-end evaluation framework that uses off-the-shelf large language models (LLMs) as interpretable evaluators: given reference and candidate captions, the LLM outputs a semantic distance score alongside chain-of-thought reasoning for transparency. This approach improves both evaluation interpretability and alignment with human assessments. On Clotho-Eval, the method achieves a 5.8% relative accuracy improvement over the domain-specific FENSE metric and up to 11% over the best general-purpose metric, and human evaluators rate its explanations up to 30% better than those of baseline methods.
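The core mechanism is simple enough to sketch in a few lines. Below is a minimal illustration of the LLM-as-judge idea, assuming OpenAI's Python client as the backend; the prompt wording, the 0–100 score scale, and the JSON output format are illustrative assumptions, not the authors' exact prompt (see the official repository linked below for the real implementation).

```python
# Sketch of an LLM-as-judge scorer for audio captions.
# Assumptions (not from the paper): OpenAI chat API as the backend,
# a 0-100 score scale, and a JSON-formatted response.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """You are evaluating a machine-generated audio caption.
Reference captions:
{references}

Candidate caption:
{candidate}

On a scale of 0 to 100, how semantically close is the candidate to the
references? Consider the sounds described, their sources, temporal order,
and the scene context. Reply with JSON: {{"reasoning": "...", "score": <int>}}"""

def clair_a_score(candidate: str, references: list[str],
                  model: str = "gpt-4o-mini") -> dict:
    """Ask the LLM for a semantic distance score plus its reasoning."""
    prompt = PROMPT.format(
        references="\n".join(f"- {r}" for r in references),
        candidate=candidate,
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # constrain output to valid JSON
        temperature=0.0,
    )
    return json.loads(resp.choices[0].message.content)

# Example usage:
# result = clair_a_score(
#     "a dog barks while rain falls on a tin roof",
#     ["rain patters on metal as a dog barks in the distance"],
# )
# print(result["score"], result["reasoning"])
```

Because the judge returns free-text reasoning alongside the numeric score, a single call yields both the metric value and a human-readable justification.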

๐Ÿ“ Abstract
The Automated Audio Captioning (AAC) task asks models to generate natural language descriptions of an audio input. Evaluating these machine-generated audio captions is a complex task that requires considering diverse factors, among them, auditory scene understanding, sound-object inference, temporal coherence, and the environmental context of the scene. While current methods focus on specific aspects, they often fail to provide an overall score that aligns well with human judgment. In this work, we propose CLAIR-A, a simple and flexible method that leverages the zero-shot capabilities of large language models (LLMs) to evaluate candidate audio captions by directly asking LLMs for a semantic distance score. In our evaluations, CLAIR-A better predicts human judgements of quality compared to traditional metrics, with a 5.8% relative accuracy improvement compared to the domain-specific FENSE metric and up to 11% over the best general-purpose measure on the Clotho-Eval dataset. Moreover, CLAIR-A offers more transparency by allowing the language model to explain the reasoning behind its scores, with these explanations rated up to 30% better by human evaluators than those provided by baseline methods. CLAIR-A is made publicly available at https://github.com/DavidMChan/clair-a.
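For context on the Clotho-Eval numbers above: such benchmarks score a metric by pairwise accuracy, i.e., how often the metric assigns the higher score to the caption that human annotators preferred. Below is a small sketch of that protocol, reusing the hypothetical clair_a_score scorer from the earlier snippet; the tie-handling convention is an assumption of this sketch, not necessarily the benchmark's exact rule.

```python
# Pairwise-accuracy protocol used by human-judgment benchmarks such as
# Clotho-Eval: a metric is "correct" on a pair when it ranks the
# human-preferred caption above the alternative. clair_a_score is the
# hypothetical scorer sketched earlier.

def pairwise_accuracy(pairs) -> float:
    """pairs: iterable of (references, preferred_caption, other_caption)."""
    correct = total = 0
    for references, preferred, other in pairs:
        s_pref = clair_a_score(preferred, references)["score"]
        s_other = clair_a_score(other, references)["score"]
        correct += s_pref > s_other  # ties count against the metric here
        total += 1
    return correct / total if total else 0.0
```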
Problem

Research questions and friction points this paper is trying to address.

Evaluating machine-generated audio captions lacks comprehensive human-aligned metrics
Current methods fail to provide overall scores matching human judgment
Need for transparent evaluation with reasoning behind caption quality scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs for audio caption evaluation
Uses zero-shot semantic distance scoring
Provides transparent scoring explanations
Tsung-Han Wu
PhD Student, UC Berkeley
Vision and Language · Computer Vision · Active Learning

Joseph Gonzalez
Department of Electrical Engineering and Computer Science (EECS), University of California, Berkeley

Trevor Darrell
Professor of Computer Science, U.C. Berkeley
Computer Vision · Artificial Intelligence · AI · Machine Learning · Deep Learning

David M. Chan
Department of Electrical Engineering and Computer Science (EECS), University of California, Berkeley