🤖 AI Summary
This work addresses the limitations of existing knowledge-based visual question answering methods, which rely too heavily on images as retrieval keys and underuse the capabilities of vision-language models (VLMs). To overcome these issues, the authors propose WikiSeeker, a novel framework that splits the VLM's role into two specialized agents: a Refiner that rewrites queries to enrich their semantic representation, and an Inspector that performs reliable context routing, thereby decoupling retrieval from generation. By integrating multimodal retrieval-augmented generation with collaborative reasoning from large language models, WikiSeeker substantially improves both retrieval accuracy and answer quality, achieving state-of-the-art performance on the EVQA, InfoSeek, and M2KR benchmarks.
📝 Abstract
Multi-modal Retrieval-Augmented Generation (RAG) has emerged as a highly effective paradigm for Knowledge-Based Visual Question Answering (KB-VQA). Despite recent advances, prevailing methods still depend primarily on images as the retrieval key and often overlook or misplace the role of Vision-Language Models (VLMs), thereby failing to fully exploit their potential. In this paper, we introduce WikiSeeker, a novel multi-modal RAG framework that bridges these gaps by proposing a multi-modal retriever and redefining the role of VLMs. Rather than serving merely as answer generators, VLMs act as two specialized agents: a Refiner and an Inspector. The Refiner leverages the VLM to rewrite the textual query according to the input image, significantly improving the performance of the multi-modal retriever. The Inspector enables a decoupled generation strategy: it routes reliable retrieved context to a separate LLM for answer generation, and falls back on the VLM's internal knowledge when retrieval is unreliable. Extensive experiments on EVQA, InfoSeek, and M2KR demonstrate that WikiSeeker achieves state-of-the-art performance, with substantial improvements in both retrieval accuracy and answer quality. Our code will be released at https://github.com/zhuyjan/WikiSeeker.
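To make the Refiner/Inspector division of labor concrete, here is a minimal Python sketch of the pipeline the abstract describes. All function names, the toy word-overlap retriever, and the reliability threshold are illustrative assumptions, not the authors' actual implementation or API; the real system would call a VLM for refinement and a learned multimodal retriever.

```python
# Hypothetical sketch of the WikiSeeker flow: Refiner -> retrieve -> Inspector.
# Names and the threshold are assumptions for illustration only.

def refine_query(image_caption: str, question: str) -> str:
    """Refiner (stand-in): rewrite the textual query using image content.
    A real Refiner would prompt a VLM with the image itself."""
    return f"{question} (about: {image_caption})"

def retrieve(query: str, corpus: list[str]) -> tuple[str, float]:
    """Toy retriever: return the passage with the highest Jaccard word
    overlap with the query, plus that overlap score in [0, 1]."""
    q = set(query.lower().split())
    best, best_score = "", 0.0
    for passage in corpus:
        p = set(passage.lower().split())
        score = len(q & p) / max(len(q | p), 1)
        if score > best_score:
            best, best_score = passage, score
    return best, best_score

def inspect_and_answer(context: str, score: float, threshold: float = 0.15) -> str:
    """Inspector (stand-in): route reliable context to a generator LLM;
    otherwise fall back on the VLM's internal knowledge."""
    if score >= threshold:
        return f"[LLM + context] {context}"
    return "[VLM internal knowledge] best guess"

corpus = [
    "The Eiffel Tower in Paris was completed in 1889.",
    "Mount Fuji is the highest mountain in Japan.",
]
query = refine_query("a tall iron tower in Paris", "When was this built?")
context, score = retrieve(query, corpus)
print(inspect_and_answer(context, score))
```

The key design point the abstract emphasizes is visible in `inspect_and_answer`: generation is decoupled from retrieval, so an unreliable retrieval result is never forced into the answer prompt.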