🤖 AI Summary
Evaluating the long, fine-grained image descriptions generated by multimodal large language models (MLLMs) remains challenging, as conventional metrics (e.g., BLEU, CIDEr) show markedly reduced correlation with human judgment, ranking accuracy, and sensitivity to hallucinations on such outputs.
Method: We conduct the first comprehensive empirical assessment of mainstream metrics' adaptability to MLLM-specific phenomena, namely output style drift and semantic hallucination, using a multidimensional framework: human evaluation benchmarks, statistical correlation analysis, adversarial perturbation testing, and cross-model output comparison.
Contribution/Results: We identify a substantial drop in the correlation between metric scores and human judgment; propose a novel, robust three-dimensional evaluation standard covering faithfulness, richness, and discriminability to better capture MLLM output quality; and outline a principled evolution path for evaluation paradigms in the MLLM era. Our findings expose fundamental limitations of existing metrics and provide actionable guidance for developing more reliable, human-aligned assessment methodologies.
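For context, the "statistical correlation analysis" mentioned above typically amounts to computing rank correlations (e.g., Kendall's tau or Spearman's rho) between automatic metric scores and human ratings over a shared set of captions. The sketch below illustrates that computation; the score lists are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: rank-correlating automatic metric scores with human ratings.
# The values below are illustrative placeholders, not results from the paper.
from scipy.stats import kendalltau, spearmanr

# Hypothetical per-caption scores from an automatic metric (e.g., CIDEr)
# and the corresponding human quality ratings for the same captions.
metric_scores = [0.62, 0.48, 0.91, 0.33, 0.75]
human_ratings = [4.0, 3.5, 4.5, 2.0, 3.0]

tau, tau_p = kendalltau(metric_scores, human_ratings)
rho, rho_p = spearmanr(metric_scores, human_ratings)

print(f"Kendall tau:  {tau:.3f} (p={tau_p:.3f})")
print(f"Spearman rho: {rho:.3f} (p={rho_p:.3f})")
```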
📝 Abstract
The evaluation of machine-generated image captions is a complex and evolving challenge. With the advent of Multimodal Large Language Models (MLLMs), image captioning has become a core task, increasing the need for robust and reliable evaluation metrics. This survey provides a comprehensive overview of advancements in image captioning evaluation, analyzing the evolution, strengths, and limitations of existing metrics. We assess these metrics across multiple dimensions, including correlation with human judgment, ranking accuracy, and sensitivity to hallucinations. Additionally, we explore the challenges posed by the longer and more detailed captions generated by MLLMs and examine the adaptability of current metrics to these stylistic variations. Our analysis highlights some limitations of standard evaluation approaches and suggests promising directions for future research in image captioning assessment.