Surveying the Landscape of Image Captioning Evaluation: A Comprehensive Taxonomy, Trends and Metrics Analysis

📅 2024-08-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the weak correlation between automated image captioning evaluation metrics and human judgments. We systematically survey over 70 existing metrics and propose, for the first time, a comprehensive, hierarchically structured taxonomy. Empirical analysis reveals that five widely adopted metrics—including BLEU and METEOR—exhibit consistently low Spearman correlations (<0.3) with human ratings across diverse benchmarks. To overcome this limitation, we introduce EnsembEval, a linear regression-based ensemble framework that fuses multiple metrics. Trained on a single dataset, EnsembEval achieves statistically significant improvements in both Spearman and Pearson correlations (average gain >0.15) across five out-of-domain test sets, demonstrating strong generalization. Our work contributes both an interpretable, principled taxonomy for caption evaluation metrics and a reusable, effective ensemble methodology—advancing the reliability and applicability of automatic image caption assessment.
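The low-correlation finding rests on Spearman rank correlation between per-caption metric scores and human ratings. A minimal sketch of that computation (pure NumPy, made-up scores, and assuming no tied ranks for simplicity):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation computed on ranks.
    Simplified version that assumes no ties among the values."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element of x
    ry = np.argsort(np.argsort(y)).astype(float)  # rank of each element of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical per-caption scores from one automated metric vs. human ratings
# (illustrative numbers only, not taken from the paper's benchmarks).
metric_scores = np.array([0.61, 0.42, 0.77, 0.30, 0.55])
human_ratings = np.array([3.0, 2.0, 5.0, 1.0, 4.0])
print(spearman(metric_scores, human_ratings))
```

A metric that tracks human judgment well yields a value near 1; the paper's point is that for five popular metrics this value stays low across benchmarks.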

📝 Abstract
The task of image captioning has recently been gaining popularity, and with it the complex task of evaluating the quality of image captioning models. In this work, we present the first survey and taxonomy of over 70 different image captioning metrics and their usage in hundreds of papers, specifically designed to help users select the most suitable metric for their needs. We find that despite the diversity of proposed metrics, the vast majority of studies rely on only five popular metrics, which we show to be weakly correlated with human ratings. We hypothesize that combining a diverse set of metrics can enhance correlation with human ratings. As an initial step, we demonstrate that a linear regression-based ensemble method, which we call EnsembEval, trained on one human ratings dataset, achieves improved correlation across five additional datasets, showing there is a lot of room for improvement by leveraging a diverse set of metrics.
Problem

Research questions and friction points this paper is trying to address.

How well do automated image captioning metrics correlate with human ratings?
Why do most studies rely on only five popular metrics despite 70+ alternatives?
Can combining a diverse set of metrics improve correlation with human judgments?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Survey and taxonomy of 70+ image captioning metrics
Linear regression-based ensemble method EnsembEval
Improved correlation with human ratings using diverse metrics
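The ensemble idea can be sketched as ordinary least squares over per-metric scores: fit weights on one human-ratings dataset, then combine metric scores into a single ensemble score. This is an illustrative sketch in the spirit of EnsembEval, not the paper's implementation; all numbers below are made up.

```python
import numpy as np

# Rows = captions, columns = scores from individual metrics
# (e.g. BLEU, METEOR, CIDEr). All values here are hypothetical.
train_scores = np.array([
    [0.2, 0.3, 0.1],
    [0.5, 0.4, 0.6],
    [0.8, 0.7, 0.9],
    [0.4, 0.6, 0.5],
])
train_human = np.array([1.0, 3.0, 5.0, 3.5])  # human ratings for those captions

# Fit per-metric weights plus an intercept by ordinary least squares.
X = np.hstack([train_scores, np.ones((len(train_scores), 1))])
w, *_ = np.linalg.lstsq(X, train_human, rcond=None)

def ensemble_score(metric_scores):
    """Combine one caption's individual metric scores into an ensemble score."""
    return float(np.append(metric_scores, 1.0) @ w)

print(ensemble_score(np.array([0.6, 0.5, 0.7])))
```

The learned weights are then applied unchanged to out-of-domain datasets, which is where the paper reports the generalization gains.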