WikiSeeker: Rethinking the Role of Vision-Language Models in Knowledge-Based Visual Question Answering

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing knowledge-based visual question answering methods, which rely too heavily on images as retrieval keys and underutilize the capabilities of vision-language models (VLMs). To overcome these issues, the authors propose WikiSeeker, a framework that assigns the VLM two specialized agent roles: a Refiner that rewrites queries to strengthen their semantic representation for retrieval, and an Inspector that routes reliable retrieved context to a separate LLM for answer generation, thereby decoupling generation from retrieval. By combining multimodal retrieval-augmented generation with collaborative reasoning across models, WikiSeeker significantly improves both retrieval accuracy and answer quality, achieving state-of-the-art performance on the EVQA, InfoSeek, and M2KR benchmarks.
📝 Abstract
Multi-modal Retrieval-Augmented Generation (RAG) has emerged as a highly effective paradigm for Knowledge-Based Visual Question Answering (KB-VQA). Despite recent advances, prevailing methods still depend primarily on images as the retrieval key and often overlook or misplace the role of Vision-Language Models (VLMs), failing to fully leverage their potential. In this paper, we introduce WikiSeeker, a novel multi-modal RAG framework that bridges these gaps by proposing a multi-modal retriever and redefining the role of VLMs. Rather than using VLMs merely as answer generators, we assign them two specialized agent roles: a Refiner and an Inspector. The Refiner uses the VLM to rewrite the textual query according to the input image, significantly improving the performance of the multi-modal retriever. The Inspector enables a decoupled generation strategy by selectively routing reliable retrieved context to a separate LLM for answer generation, while falling back to the VLM's internal knowledge when retrieval is unreliable. Extensive experiments on EVQA, InfoSeek, and M2KR demonstrate that WikiSeeker achieves state-of-the-art performance, with substantial improvements in both retrieval accuracy and answer quality. Our code will be released at https://github.com/zhuyjan/WikiSeeker.
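To make the described pipeline concrete, the following is a minimal sketch of the Refiner → retrieve → Inspector flow outlined in the abstract. The `vlm`, `llm`, and `retriever` objects, their `generate`/`search` methods, the prompts, and the yes/no reliability check are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a Refiner/Inspector pipeline for KB-VQA.
# All interfaces below (vlm.generate, llm.generate, retriever.search)
# are assumed for illustration; they do not reflect the WikiSeeker codebase.

def refine_query(vlm, image, question: str) -> str:
    """Refiner: ask the VLM to rewrite the question so that it names the
    visual entity explicitly, making it a stronger key for retrieval."""
    prompt = (
        "Rewrite the question so it explicitly mentions the entity shown "
        f"in the image.\nQuestion: {question}\nRewritten question:"
    )
    return vlm.generate(image=image, prompt=prompt)


def answer(vlm, llm, retriever, image, question: str) -> str:
    # 1) Refiner rewrites the query conditioned on the image.
    refined = refine_query(vlm, image, question)

    # 2) Multi-modal retriever uses both the image and the refined text as keys.
    passages = retriever.search(image=image, text=refined, top_k=5)
    context = "\n".join(passages)

    # 3) Inspector: the VLM judges whether the retrieved context is reliable,
    #    i.e., about the depicted entity and relevant to the question.
    verdict = vlm.generate(
        image=image,
        prompt=(
            f"Context:\n{context}\nQuestion: {refined}\n"
            "Is this context sufficient and relevant to answer the question? "
            "Answer yes or no:"
        ),
    )

    # 4) Decoupled generation: route reliable context to a separate LLM;
    #    otherwise fall back to the VLM's internal knowledge.
    if verdict.strip().lower().startswith("yes"):
        return llm.generate(
            prompt=f"Context:\n{context}\nQuestion: {refined}\nAnswer:"
        )
    return vlm.generate(image=image, prompt=f"Question: {question}\nAnswer:")
```

Under these assumptions, the key design choice is that the generator LLM only ever sees context the Inspector has judged reliable, while unreliable retrievals are discarded rather than being allowed to mislead answer generation.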
Problem

Research questions and friction points this paper is trying to address.

Knowledge-Based Visual Question Answering
Vision-Language Models
Multi-modal Retrieval-Augmented Generation
Retrieval Key
VLM Role
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Models
Retrieval-Augmented Generation
Multi-modal Retrieval
Query Refinement
Decoupled Generation
Yingjian Zhu
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences
Xinming Wang
School of Artificial Intelligence, University of Chinese Academy of Sciences; State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences
Kun Ding
CASIA
CV, Multimodal
Ying Wang
Institute of Computing Technology, Chinese Academy of Sciences
Reliable Computer Architecture, VLSI Design, Machine Learning, Memory Systems
Bin Fan
University of Science and Technology Beijing, previously at NLPR, CASIA
Computer Vision, Deep Learning, Image Processing
Shiming Xiang
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Distance Metric Learning, Semi-supervised Learning, Manifold Learning, Regression, Feature Selection