Multimodal RAG Enhanced Visual Description

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the modality gap caused by misaligned image and text embedding spaces in pretrained large multimodal models (LMMs), this paper proposes a fine-tuning-free, lightweight, and efficient retrieval-augmented generation (RAG) framework. The method uses a linear projection, computed efficiently without fine-tuning, to align visual and textual embeddings, and dynamically retrieves relevant textual descriptions via cross-modal retrieval to serve as contextual input. It further introduces an iterative distillation mechanism that automatically generates high-quality synthetic captions to refine the alignment mapping. Experiments on two standard multimodal benchmarks demonstrate substantial improvements in caption accuracy and semantic relevance. Crucially, unlike conventional fine-tuning, the approach does not rely on large-scale annotated data and achieves effective cross-modal alignment under low-resource conditions, offering an efficient paradigm for multimodal representation alignment.
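A minimal sketch (not the authors' code) of how such a fine-tuning-free alignment could be computed in closed form: a least-squares linear mapping from the LMM's image embeddings to the paired text embeddings. All names and shapes are illustrative assumptions.

```python
import numpy as np

def fit_linear_mapping(image_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Closed-form least-squares solution of  image_embs @ W ≈ text_embs.

    image_embs: (N, d_img) embeddings of training images from the LMM's vision encoder.
    text_embs:  (N, d_txt) embeddings of their paired textual descriptions.
    """
    W, *_ = np.linalg.lstsq(image_embs, text_embs, rcond=None)
    return W  # shape (d_img, d_txt)

def project(image_emb: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map a single image embedding into the text space for cross-modal retrieval."""
    return image_emb @ W
```

Because the mapping is solved in one shot rather than trained, no LMM parameters are updated, which is what keeps the approach lightweight.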

📝 Abstract
Textual descriptions for multimodal inputs entail recurrent refinement of queries to produce relevant output images. Despite efforts to address challenges such as scaling model size and data volume, the cost associated with pre-training and fine-tuning remains substantial. Moreover, pre-trained large multimodal models (LMMs) encounter a modality gap, characterised by a misalignment between textual and visual representations within a common embedding space. Although fine-tuning can potentially mitigate this gap, it is typically expensive and impractical due to the requirement for extensive domain-specific data. To overcome this challenge, we propose a lightweight training-free approach utilising Retrieval-Augmented Generation (RAG) to bridge the modality gap with a linear mapping, which can be computed efficiently. During inference, this mapping is applied to images embedded by the LMM, enabling retrieval of the closest textual descriptions from the training set. These textual descriptions, in conjunction with an instruction, serve as the input prompt for the language model to generate new textual descriptions. In addition, we introduce an iterative technique for distilling the mapping by generating synthetic descriptions via the language model, facilitating optimisation for standard image description measures. Experimental results on two benchmark multimodal datasets demonstrate significant improvements.
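An illustrative sketch of the inference-time RAG step the abstract describes: project the image embedding into the text space, retrieve the nearest training descriptions by cosine similarity, and assemble them with an instruction into a prompt for the language model. Function names, the choice of cosine similarity, and the prompt template are assumptions, not the paper's exact design.

```python
import numpy as np

def retrieve_descriptions(image_emb, W, text_embs, descriptions, k=5):
    """Return the k training descriptions closest to the projected image embedding."""
    query = image_emb @ W
    query = query / np.linalg.norm(query)
    keys = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = keys @ query                      # cosine similarity to every training caption
    top = np.argsort(-scores)[:k]
    return [descriptions[i] for i in top]

def build_prompt(retrieved, instruction="Describe the image concisely."):
    """Combine retrieved captions with an instruction into the LM input prompt."""
    context = "\n".join(f"- {d}" for d in retrieved)
    return f"{instruction}\nSimilar descriptions:\n{context}\nNew description:"
```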
Problem

Research questions and friction points this paper is trying to address.

Address modality gap in multimodal models
Reduce cost of pre-training and fine-tuning
Improve textual description relevance for images
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight training-free RAG approach
Linear mapping bridges modality gap
Iterative technique optimizes image descriptions (see the sketch below)
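A hypothetical sketch of the iterative distillation idea: the language model writes synthetic descriptions, captions that score well on a standard image-description measure are kept as new (image, text) pairs, and the linear mapping is re-estimated on the enlarged set. `generate_caption`, `embed_text`, and `caption_score` are placeholders for components the listing does not specify.

```python
import numpy as np

def distill_mapping(W, img_embs, txt_embs, images, refs,
                    generate_caption, embed_text, caption_score,
                    rounds=3, threshold=0.5):
    """Refine the image-to-text mapping W using LM-generated synthetic captions."""
    for _ in range(rounds):
        keep_img, keep_txt = [], []
        for emb, image, ref in zip(img_embs, images, refs):
            synthetic = generate_caption(image)          # e.g. via the RAG prompt sketched above
            if caption_score(synthetic, ref) >= threshold:   # keep only high-scoring captions
                keep_img.append(emb)
                keep_txt.append(embed_text(synthetic))
        if keep_img:                                     # refit the mapping on the enlarged pairs
            X = np.vstack([img_embs, np.array(keep_img)])
            Y = np.vstack([txt_embs, np.array(keep_txt)])
            W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W
```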