🤖 AI Summary
To address the limited output quality of Large Vision-Language Models (LVLMs) in multimodal understanding and generation, this paper proposes UniRAG—a plug-and-play retrieval-augmented reasoning framework. At its core, UniRAG leverages a vision-language retriever (e.g., UniIR) to dynamically retrieve relevant multimodal examples during inference and inject them as few-shot demonstrations into the prompt—requiring neither model fine-tuning nor architectural modification. This work presents the first systematic empirical validation that retrieval augmentation significantly improves LVLM performance in common-entity scenarios, demonstrating strong cross-model generalizability. On the MSCOCO benchmark, UniRAG consistently enhances image captioning quality across diverse state-of-the-art LVLMs, including GPT-4o, Gemini-Pro, LLaVA, LaVIT, and Emu2. The implementation is publicly available.
📝 Abstract
Recently, Large Vision Language Models (LVLMs) have unlocked many complex use cases that require Multi-Modal (MM) understanding (e.g., image captioning or visual question answering) and MM generation (e.g., text-guided image generation or editing) capabilities. To further improve the output fidelity of LVLMs, we introduce UniRAG, a plug-and-play technique that adds relevant retrieved information to prompts as few-shot examples during inference. Unlike the common belief that Retrieval Augmentation (RA) mainly improves generation or understanding of uncommon entities, our evaluation results on the MSCOCO dataset with common entities show that both proprietary models like GPT-4o and Gemini-Pro and smaller open-source models like LLaVA, LaVIT, and Emu2 significantly enhance their generation quality when their input prompts are augmented with relevant information retrieved by Vision-Language (VL) retrievers like UniIR models. All the necessary code to reproduce our results is available at https://github.com/castorini/UniRAG.
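The retrieve-then-prompt workflow described above can be sketched in a few lines. This is a minimal illustrative toy, not the actual UniIR or UniRAG API: `retrieve` stands in for a VL retriever scoring a query embedding against an indexed image-caption collection, and `build_prompt` injects the top-k retrieved captions as few-shot demonstrations, with no fine-tuning of the LVLM involved.

```python
def retrieve(query_vec, index, k=2):
    """Stand-in for a UniIR-style VL retriever: rank indexed (caption,
    embedding) pairs by dot-product similarity to the query embedding."""
    scored = sorted(
        index,
        key=lambda item: -sum(q * v for q, v in zip(query_vec, item[1])),
    )
    return [caption for caption, _ in scored[:k]]


def build_prompt(query_placeholder, examples):
    """Inject retrieved captions as few-shot demonstrations ahead of the
    actual query, leaving the LVLM itself unchanged."""
    shots = "\n".join(f"Example caption: {c}" for c in examples)
    return f"{shots}\nNow caption the new image: {query_placeholder}"


# Toy in-memory index of (caption, embedding) pairs; real UniRAG retrieves
# multimodal examples from a large corpus such as MSCOCO.
toy_index = [
    ("a dog playing fetch in a park", [0.9, 0.1, 0.0]),
    ("a plate of pasta on a table",   [0.0, 0.2, 0.9]),
    ("two dogs running on grass",     [0.8, 0.2, 0.1]),
]

examples = retrieve([1.0, 0.0, 0.0], toy_index, k=2)
prompt = build_prompt("<new image>", examples)
print(prompt)
```

In the actual system the embeddings come from a trained VL retriever and the demonstrations include the retrieved images as well as their captions; the sketch only captures the plug-and-play structure of augmenting the prompt at inference time.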