🤖 AI Summary
Current image captioning models trained on MS-COCO tend to generate overly concise, generic descriptions lacking fine-grained details, falling short of human-level performance. To address this, we propose a training-free, multi-model caption fusion framework: (1) ensemble multiple state-of-the-art captioning models to generate initial candidates; (2) rank them using our novel BLIPScore—a zero-shot, vision-language alignment metric based on BLIP-2; and (3) prompt a large language model (LLM) to fuse the top-two ranked captions. Our key contributions are the first introduction of BLIPScore for caption evaluation and a zero-shot, multi-source caption fusion paradigm that enhances descriptive richness while mitigating hallucination. Experiments demonstrate consistent superiority over strong baselines on both COCO and Flickr30k benchmarks. Notably, our method achieves significant gains across fine-grained automatic metrics—including ALOHa, CAPTURE, and Polos—as well as in human evaluations of semantic consistency and descriptive fidelity.
📝 Abstract
State-of-the-art (SoTA) image captioning models are often trained on the Microsoft Common Objects in Context (MS-COCO) dataset, which contains human-annotated captions with an average length of approximately ten tokens. Although effective for general scene understanding, these short captions often fail to capture complex scenes and convey detailed information. Moreover, captioning models tend to exhibit a bias towards the "average" caption, which captures only the more general aspects of a scene and overlooks finer details. In this paper, we present a novel approach to generating richer and more informative image captions by combining the captions produced by different SoTA captioning models. Our proposed method requires no additional model training: given an image, it leverages pre-trained models from the literature to generate initial captions, then ranks them using a newly introduced image-text alignment metric, which we name BLIPScore. The top two captions are subsequently fused by a Large Language Model (LLM) to produce the final, more detailed description. Experimental results on the MS-COCO and Flickr30k test sets demonstrate the effectiveness of our approach in terms of caption-image alignment and hallucination reduction, as measured by the ALOHa, CAPTURE, and Polos metrics. A subjective study lends additional support to these results, suggesting that the captions produced by our method are generally perceived as more consistent with human judgment. By combining the strengths of diverse SoTA models, our method enhances the quality and appeal of image captions, narrowing the gap between automated systems and the rich, informative nature of human-generated descriptions. This advance also enables the generation of more suitable captions for training both vision-language and captioning models.
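The rank-and-fuse pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `score_fn` stands in for the BLIP-2-based BLIPScore (which would score each caption against the image), and `fuse_fn` stands in for the LLM fusion prompt; both names and the toy scorer in the usage example are assumptions for illustration only.

```python
def rank_captions(captions, score_fn):
    """Rank candidate captions by descending image-text alignment score.

    `score_fn` is a placeholder for a BLIPScore-style metric that would
    take the image into account; here it only receives the caption.
    """
    return sorted(captions, key=score_fn, reverse=True)


def fuse_top_two(captions, score_fn, fuse_fn):
    """Select the two best-scoring candidates and fuse them into one caption.

    `fuse_fn` is a placeholder for prompting an LLM to merge the two
    captions into a single richer description.
    """
    ranked = rank_captions(captions, score_fn)
    if len(ranked) < 2:
        return ranked[0] if ranked else ""
    return fuse_fn(ranked[0], ranked[1])


# Toy usage: word count as a stand-in scorer, string join as a stand-in fuser.
candidates = ["a dog", "a brown dog running on grass", "dog"]
toy_score = lambda caption: len(caption.split())
toy_fuse = lambda a, b: f"{a}; {b}"
fused = fuse_top_two(candidates, toy_score, toy_fuse)
# → "a brown dog running on grass; a dog"
```

In practice the ranking step would call a vision-language model once per candidate, so caching image features across candidates keeps the overhead linear in the number of captioning models used.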