Improving Image Captioning Descriptiveness by Ranking and LLM-based Fusion

📅 2023-06-20
🏛️ arXiv.org
📈 Citations: 24
Influential: 0
🤖 AI Summary
Current image captioning models trained on MS-COCO tend to generate overly concise, generic descriptions lacking fine-grained details, falling short of human-level performance. To address this, we propose a training-free, multi-model caption fusion framework: (1) ensemble multiple state-of-the-art captioning models to generate initial candidates; (2) rank them using our novel BLIPScore—a zero-shot, vision-language alignment metric based on BLIP-2; and (3) prompt a large language model (LLM) to fuse the top-two ranked captions. Our key contributions are the first introduction of BLIPScore for caption evaluation and a zero-shot, multi-source caption fusion paradigm that enhances descriptive richness while mitigating hallucination. Experiments demonstrate consistent superiority over strong baselines on both COCO and Flickr30k benchmarks. Notably, our method achieves significant gains across fine-grained automatic metrics—including ALOHa, CAPTURE, and Polos—as well as in human evaluations of semantic consistency and descriptive fidelity.
📝 Abstract
State-of-the-Art (SoTA) image captioning models are often trained on the Microsoft Common Objects in Context (MS-COCO) dataset, which contains human-annotated captions with an average length of approximately ten tokens. Although effective for general scene understanding, these short captions often fail to capture complex scenes and convey detailed information. Moreover, captioning models tend to exhibit bias towards the "average" caption, which captures only the more general aspects, thus overlooking finer details. In this paper, we present a novel approach to generate richer and more informative image captions by combining the captions generated from different SoTA captioning models. Our proposed method requires no additional model training: given an image, it leverages pre-trained models from the literature to generate the initial captions, and then ranks them using a newly introduced image-text-based metric, which we name BLIPScore. Subsequently, the top two captions are fused using a Large Language Model (LLM) to produce the final, more detailed description. Experimental results on the MS-COCO and Flickr30k test sets demonstrate the effectiveness of our approach in terms of caption-image alignment and hallucination reduction according to the ALOHa, CAPTURE, and Polos metrics. A subjective study lends additional support to these results, suggesting that the captions produced by our model are generally perceived as more consistent with human judgment. By combining the strengths of diverse SoTA models, our method enhances the quality and appeal of image captions, bridging the gap between automated systems and the rich and informative nature of human-generated descriptions. This advance enables the generation of more suitable captions for the training of both vision-language and captioning models.
Problem

Research questions and friction points this paper is trying to address.

Short captions from SoTA models fail to capture complex scene details
Captioning models exhibit a bias towards average descriptions, overlooking finer details
Existing methods struggle to produce rich, informative descriptions comparable to human annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining captions from multiple SoTA models
Ranking captions using new BLIPScore metric
Fusing top captions with an LLM
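The three innovation steps above can be sketched as a small pipeline. This is an illustrative sketch, not the paper's implementation: `toy_score` is a stand-in for BLIPScore (which the paper computes with BLIP-2 image-text alignment), and the fusion prompt wording is our own assumption about what such a prompt might look like.

```python
from typing import Callable, List, Tuple


def rank_captions(image, captions: List[str],
                  score_fn: Callable) -> List[Tuple[str, float]]:
    """Rank candidate captions by an image-text alignment score,
    highest first (BLIPScore in the paper; any scorer with the same
    signature works here)."""
    scored = [(c, score_fn(image, c)) for c in captions]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


def fusion_prompt(top1: str, top2: str) -> str:
    """Build a fusion prompt for the LLM. The exact wording here is
    hypothetical, not taken from the paper."""
    return (
        "Merge the two image captions below into a single, more detailed "
        "caption. Keep every correct detail and add nothing that is not "
        f"supported by either caption.\nCaption A: {top1}\nCaption B: {top2}"
    )


# Toy scorer standing in for BLIP-2 alignment: counts how many
# (hypothetical) detected image concepts the caption mentions.
def toy_score(image_concepts, caption: str) -> float:
    return float(len(set(caption.lower().split()) & set(image_concepts)))


concepts = {"dog", "frisbee", "grass", "park"}
candidates = [
    "a dog",                                   # e.g. from captioner 1
    "a dog catching a frisbee on the grass",   # e.g. from captioner 2
    "an animal outdoors",                      # e.g. from captioner 3
]
ranked = rank_captions(concepts, candidates, toy_score)
top1, top2 = ranked[0][0], ranked[1][0]
print(fusion_prompt(top1, top2))
```

With a real scorer, `rank_captions` would take the image itself rather than a concept set; the training-free property comes from the fact that every component (captioners, scorer, LLM) is used off the shelf.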
Simone Bianco
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milano, IT
Luigi Celona
Assistant Professor, University of Milano-Bicocca
Visual quality assessment, Computer vision, Deep learning, Machine learning
Marco Donzella
Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milano, IT
Paolo Napoletano
Associate Professor, University of Milano-Bicocca
Intelligent Sensing, Computer Vision, Pattern Recognition, Deep Learning, Artificial Intelligence