CONCAP: Seeing Beyond English with Concepts Retrieval-Augmented Captioning

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multilingual vision-language models significantly underperform their English counterparts in image captioning, primarily due to the scarcity of multilingual training data, the high cost of large-scale model parameterization, and the translation-induced biases and semantic mismatches of existing retrieval-augmented generation (RAG) approaches, especially those that rely on English-translated retrieval texts. To address these challenges, we propose CONCAP, the first multilingual image captioning model that integrates image-specific concept extraction with source-language-aware concept fusion within a RAG framework. By jointly modeling vision–language alignment, cross-lingual concept retrieval, and source-language-guided generation, CONCAP mitigates translation bias and reduces dependence on multilingual annotated data. Evaluated on the XM3600 benchmark, CONCAP achieves substantial improvements in BLEU-4 and SPICE scores for low- and medium-resource languages, markedly narrowing the performance gap with English.
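To make the retrieval-and-fusion idea concrete, here is a minimal sketch of a CONCAP-style pipeline: retrieve the datastore captions nearest to the image embedding, then fuse them with image-specific concepts and a language tag into a single decoder prompt. Everything below is illustrative, not the paper's implementation: the random vectors stand in for a CLIP-style encoder, and the function names and prompt template are hypothetical assumptions.

```python
# Hedged sketch of concept-aware retrieval-augmented captioning.
# All names, the prompt template, and the toy embeddings are assumptions;
# the paper defines the real encoder, datastore, and fusion mechanism.
import numpy as np

def retrieve_captions(image_emb, caption_embs, captions, k=3):
    """Return the k datastore captions whose embeddings have the highest
    cosine similarity to the query image embedding."""
    sims = caption_embs @ image_emb / (
        np.linalg.norm(caption_embs, axis=1) * np.linalg.norm(image_emb))
    top = np.argsort(-sims)[:k]
    return [captions[i] for i in top]

def build_prompt(retrieved, concepts, language):
    """Fuse retrieved target-language captions and image-specific concepts
    into a single language-tagged prompt for a multilingual decoder."""
    concept_str = ", ".join(concepts)
    context = " ".join(retrieved)
    return f"<{language}> concepts: {concept_str}. retrieved: {context}. caption:"

# Toy usage: random embeddings stand in for a CLIP-style encoder.
rng = np.random.default_rng(0)
caption_embs = rng.normal(size=(5, 8))
captions = [f"datastore caption {i}" for i in range(5)]
image_emb = rng.normal(size=8)
concepts = ["dog", "beach", "ball"]  # e.g., from an image tagger
prompt = build_prompt(retrieve_captions(image_emb, caption_embs, captions),
                      concepts, language="sw")
print(prompt)
```

In the full system, the resulting prompt would condition a multilingual caption decoder; keeping both retrieval and generation in the target language is what avoids the translation bias described above.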

📝 Abstract
Multilingual vision-language models have made significant strides in image captioning, yet they still lag behind their English counterparts due to limited multilingual training data and costly large-scale model parameterization. Retrieval-augmented generation (RAG) offers a promising alternative by conditioning caption generation on retrieved examples in the target language, reducing the need for extensive multilingual training. However, multilingual RAG captioning models often depend on retrieved captions translated from English, which can introduce mismatches and linguistic biases relative to the source language. We introduce CONCAP, a multilingual image captioning model that integrates retrieved captions with image-specific concepts, enhancing the contextualization of the input image and grounding the captioning process across different languages. Experiments on the XM3600 dataset indicate that CONCAP enables strong performance on low- and mid-resource languages, with substantially reduced data requirements. Our findings highlight the effectiveness of concept-aware retrieval augmentation in bridging multilingual performance gaps.
Problem

Research questions and friction points this paper is trying to address.

Multilingual captioning lags behind English due to limited training data
Retrieved captions translated from English introduce mismatches and linguistic biases
Can concept-aware retrieval augmentation bridge the multilingual performance gap?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses concept retrieval-augmented captioning for multilingual support
Reduces multilingual data needs with image-specific concepts
Improves performance on low- and mid-resource languages (see the evaluation sketch after this list)
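As a rough illustration of how BLEU-4 gains like those reported on XM3600 could be measured, the snippet below scores toy captions with sacrebleu. The paper's actual evaluation protocol (tokenization choices, number of references per image, SPICE scoring) is not reproduced here, and the captions are placeholders.

```python
# Hedged sketch of BLEU-4 scoring; sacrebleu is a standard choice,
# but this is not the paper's exact XM3600 setup.
import sacrebleu

# Toy system outputs and references; real XM3600 captions are multilingual.
hypotheses = [
    "a dog plays with a ball on the beach",
    "two people ride bicycles down a street",
]
references = [[
    "a dog is playing with a ball at the beach",
    "two people are riding bikes down the street",
]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # 4-gram BLEU by default
print(f"BLEU-4: {bleu.score:.1f}")
```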
👥 Authors
George Ibrahim
Department of Natural Language Processing, MBZUAI
Rita Ramos
INESC-ID, Instituto Superior Técnico, University of Lisbon
Yova Kementchedjhieva
Assistant Professor, MBZUAI