🤖 AI Summary
Existing image captioning models achieve high accuracy but struggle to generate discriminative captions that distinguish visually similar images. Method: To address insufficient caption distinctiveness, the authors propose a group-based differential captioning framework that visually compares each image against the others in a group of similar images. It introduces a Group-based Differential Memory Attention (GDMA) module to identify and emphasize object features that are unique within the image group (i.e., have low similarity to objects in the other images); selects distinctive words from the ground-truth captions to guide both the language decoder and the GDMA module; and defines a new metric, the Distinctive Word Rate (DisWordRate), to quantify caption distinctiveness. Contribution/Results: The approach substantially improves the distinctiveness of several baseline models and achieves state-of-the-art distinctiveness with only a marginal loss in accuracy. The quantitative results are corroborated by a user study, which also supports the rationality of the DisWordRate metric.
📝 Abstract
Recent advances in image captioning have focused on enhancing accuracy by substantially increasing dataset and model sizes. While conventional captioning models exhibit high performance on established metrics such as BLEU, CIDEr, and SPICE, the capability of captions to distinguish the target image from other similar images is under-explored. To generate distinctive captions, a few pioneering works employed contrastive learning or re-weighted the ground-truth captions. However, these approaches often overlook the relationships among objects within a group of similar images (e.g., items or properties within the same album, or fine-grained events). In this paper, we introduce a novel approach to enhance the distinctiveness of image captions, namely the Group-based Differential Distinctive Captioning Method, which visually compares each image with the other images in a similar group and highlights the uniqueness of each image. In particular, we introduce a Group-based Differential Memory Attention (GDMA) module, designed to identify and emphasize object features in an image that are uniquely distinguishable within its image group, i.e., those exhibiting low similarity with objects in other images. This mechanism ensures that such unique object features are prioritized during caption generation, thereby enhancing the distinctiveness of the resulting captions. To further refine this process, we select distinctive words from the ground-truth captions to guide both the language decoder and the GDMA module. Additionally, we propose a new evaluation metric, the Distinctive Word Rate (DisWordRate), to quantitatively assess caption distinctiveness. Quantitative results indicate that the proposed method significantly improves the distinctiveness of several baseline models and achieves state-of-the-art performance on distinctiveness without excessively sacrificing accuracy.
Moreover, the results of our user study are consistent with the quantitative evaluation and demonstrate the rationality of the new metric DisWordRate.
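As a rough illustration only (not the paper's implementation), the two core ideas above can be sketched in a few lines: down-weighting object features of a target image that closely match objects in the other images of its group, and measuring the fraction of "distinctive" words in a caption. The cosine-similarity choice, the mapping from similarity to weight, and the treatment of the distinctive vocabulary as a given set are all assumptions made for this sketch.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors (plain lists of floats)."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def distinctiveness_weights(group_feats, target_idx):
    """Sketch of the GDMA intuition: for each object feature of the target
    image, find its closest match among object features of the OTHER images
    in the group, and assign a high weight when that match is weak.

    group_feats: list (one entry per image) of lists of feature vectors.
    Returns one weight in [0, 1] per target object (1 = unique in group).
    """
    others = [obj for i, feats in enumerate(group_feats)
              if i != target_idx for obj in feats]
    weights = []
    for obj in group_feats[target_idx]:
        max_sim = max(cosine(obj, o) for o in others)
        # Map cosine similarity in [-1, 1] to a weight in [1, 0]:
        # identical feature elsewhere -> weight 0, opposite -> weight 1.
        weights.append(1.0 - (max_sim + 1.0) / 2.0)
    return weights

def dis_word_rate(caption_tokens, distinctive_vocab):
    """Sketch of DisWordRate: fraction of caption tokens that belong to a
    distinctive vocabulary. How that vocabulary is built follows the paper;
    here it is simply passed in as a set (an assumption for illustration)."""
    if not caption_tokens:
        return 0.0
    hits = sum(tok in distinctive_vocab for tok in caption_tokens)
    return hits / len(caption_tokens)
```

For example, `dis_word_rate("a vintage car near a lighthouse".split(), {"vintage", "lighthouse"})` returns 2/6; a caption that mentions only generic objects scores 0, matching the intent that higher DisWordRate indicates a more discriminative caption.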