UniRVQA: A Unified Framework for Retrieval-Augmented Vision Question Answering via Self-Reflective Joint Training

📅 2025-04-05
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing KB-VQA systems commonly adopt a decoupled retrieval-generation architecture, leading to insufficient parameter sharing and limited capability for fine-grained multimodal knowledge retrieval. Method: This paper proposes an end-to-end unified framework featuring a novel self-reflective joint training mechanism and a cross-task parameter-sharing architecture. It integrates late interaction, reflective knowledge boundary assessment, and self-reflective answer verification—all built upon a general-purpose multimodal pretrained foundation model, avoiding costly task-specific pretraining. Contribution/Results: The approach significantly enhances knowledge-intensive reasoning while preserving computational efficiency. Experiments demonstrate a 4.7% absolute accuracy gain on KB-VQA benchmarks and an average 7.5% improvement over base multimodal LMs on standard VQA tasks. It effectively bridges the retrieval-generation gap and strengthens multimodal knowledge alignment and verification capabilities.
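The summary names late interaction as the retrieval mechanism but does not give its exact form. As a point of reference, here is a minimal sketch of the ColBERT-style MaxSim operator it most likely resembles; the function name, tensor shapes, and normalization are assumptions, not the paper's specification.

```python
import torch
import torch.nn.functional as F

def late_interaction_score(query_tokens: torch.Tensor,
                           doc_tokens: torch.Tensor) -> torch.Tensor:
    """MaxSim late interaction (assumed formulation, not confirmed by the paper).

    query_tokens: (Nq, d) L2-normalized multimodal query token embeddings
    doc_tokens:   (Nd, d) L2-normalized document token embeddings
    """
    sim = query_tokens @ doc_tokens.T  # (Nq, Nd) token-level cosine similarities
    # Each query token keeps only its best-matching document token,
    # then the per-token maxima are summed into one relevance score.
    return sim.max(dim=1).values.sum()

# Toy usage with random embeddings
q = F.normalize(torch.randn(32, 128), dim=-1)
d = F.normalize(torch.randn(180, 128), dim=-1)
print(late_interaction_score(q, d))
```

Keeping per-token embeddings instead of a single pooled vector lets individual query tokens, including visual ones, match the most relevant document tokens, which is the fine-grained retrieval capability the summary emphasizes.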

📝 Abstract
Knowledge-based Vision Question Answering (KB-VQA) systems address complex visually grounded questions requiring external knowledge, such as web-sourced encyclopedia articles. Existing methods often use sequential and separate frameworks for the retriever and the generator with limited parametric knowledge sharing. However, since both retrieval and generation tasks require accurate understanding of contextual and external information, such separation can potentially lead to suboptimal system performance. Another key challenge is the integration of multimodal information. General-purpose multimodal pre-trained models, while adept at multimodal representation learning, struggle with fine-grained retrieval required for knowledge-intensive visual questions. Recent specialized pre-trained models mitigate the issue, but are computationally expensive. To bridge the gap, we propose a Unified Retrieval-Augmented VQA framework (UniRVQA). UniRVQA adapts general multimodal pre-trained models for fine-grained knowledge-intensive tasks within a unified framework, enabling cross-task parametric knowledge sharing and the extension of existing multimodal representation learning capability. We further introduce a reflective-answering mechanism that allows the model to explicitly evaluate and refine its knowledge boundary. Additionally, we integrate late interaction into the retrieval-augmented generation joint training process to enhance fine-grained understanding of queries and documents. Our approach achieves competitive performance against state-of-the-art models, delivering a significant 4.7% improvement in answering accuracy and bringing an average 7.5% boost in base MLLMs' VQA performance.
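The abstract describes the reflective-answering mechanism only at a high level: the model evaluates its own knowledge boundary and verifies candidate answers. A plausible control flow is sketched below; every method name (generate, within_knowledge_boundary, retrieve, verify) is an illustrative placeholder rather than the paper's actual API.

```python
def reflective_answer(model, retriever, image, question, top_k=5):
    """Hypothetical reflective-answering loop (placeholder API, not UniRVQA's)."""
    # 1. Attempt a direct answer from parametric knowledge alone.
    draft = model.generate(image, question)

    # 2. Knowledge-boundary assessment: is the question answerable unaided?
    if model.within_knowledge_boundary(image, question, draft):
        return draft

    # 3. Outside the boundary: retrieve external knowledge and regenerate.
    for doc in retriever.retrieve(image, question, top_k=top_k):
        answer = model.generate(image, question, context=doc)
        # 4. Self-reflective verification of the knowledge-grounded answer.
        if model.verify(image, question, doc, answer):
            return answer

    # Fall back to the parametric draft if nothing passes verification.
    return draft
```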
Problem

Research questions and friction points this paper is trying to address.

Decoupled retriever-generator pipelines in KB-VQA limit parametric knowledge sharing and overall performance
General-purpose multimodal pre-trained models struggle with the fine-grained retrieval that knowledge-intensive VQA requires
Models lack an explicit mechanism to assess and refine their own knowledge boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for retrieval-augmented VQA with cross-task parameter sharing (joint training objective sketched after this list)
Reflective-answering mechanism for knowledge-boundary assessment and answer verification
Late interaction for fine-grained understanding of queries and documents
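As noted in the first bullet, the unified framework trains retrieval and generation over shared parameters. The paper's exact objective is not reproduced here; a common way to realize such joint training, shown purely as an assumption-laden sketch, is to sum a contrastive retrieval loss over in-batch negatives with the generator's token-level cross-entropy.

```python
import torch
import torch.nn.functional as F

def joint_loss(query_emb, pos_doc_emb, neg_doc_embs,
               gen_logits, gen_targets, alpha: float = 1.0):
    """Sketch of a joint retrieval + generation objective.

    The contrastive form, in-batch negatives, and weight `alpha`
    are assumptions, not values reported by the paper.
    query_emb:    (B, d) pooled query embeddings
    pos_doc_emb:  (B, d) matching document embeddings
    neg_doc_embs: (N, d) negative document embeddings
    gen_logits:   (B, T, V) generator logits; gen_targets: (B, T) token ids
    """
    pos = (query_emb * pos_doc_emb).sum(-1, keepdim=True)   # (B, 1)
    neg = query_emb @ neg_doc_embs.T                        # (B, N)
    scores = torch.cat([pos, neg], dim=-1)
    labels = torch.zeros(scores.size(0), dtype=torch.long,
                         device=scores.device)              # positive at index 0
    retrieval_loss = F.cross_entropy(scores, labels)

    # Standard next-token cross-entropy for the generator.
    gen_loss = F.cross_entropy(gen_logits.flatten(0, 1), gen_targets.flatten())
    return retrieval_loss + alpha * gen_loss
```

Because both losses backpropagate through the same multimodal backbone, retrieval and generation share parametric knowledge rather than living in two disconnected models, which is the gap the Problem section describes.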
👥 Authors
Jiaqi Deng (The University of Hong Kong)
Kaize Shi (The University of Technology Sydney, Sydney, New South Wales, Australia)
Zonghan Wu (SAIFS, East China Normal University)
Huan Huo (The University of Technology Sydney, Sydney, New South Wales, Australia)
Dingxian Wang (Upwork)
Guandong Xu (The University of Technology Sydney, Sydney, New South Wales, Australia)