GC-KBVQA: A New Four-Stage Framework for Enhancing Knowledge Based Visual Question Answering Performance

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
KB-VQA methods often suffer from reasoning biases due to redundant or contextually misaligned external knowledge. To address this, we propose a zero-shot, multimodal-fine-tuning-free framework comprising four stages, centered on a question-aware grounded image description generation mechanism: it jointly leverages fine-grained visual features and dynamically retrieved external knowledge to produce highly relevant, context-aligned prompts. Our approach integrates pretrained large language models with vision-language grounding techniques, combining dynamic knowledge retrieval and structured prompt engineering—without requiring end-to-end multimodal training. Evaluated on multiple KB-VQA benchmarks, it significantly outperforms state-of-the-art methods, achieving superior accuracy, low deployment overhead, and strong cross-task generalization. The code will be publicly released.
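The four-stage pipeline described above can be sketched roughly as follows. This is a minimal illustrative skeleton, not the authors' implementation: every function name is hypothetical, and the model calls are stubbed out where a real system would invoke a vision-language grounding model, a knowledge retriever, and a frozen general-purpose LLM.

```python
# Hypothetical sketch of the four-stage GC-KBVQA pipeline.
# All names and stubbed calls are illustrative assumptions, not the paper's code.

def generate_grounded_caption(image, question):
    """Stage 1: question-aware grounded captioning.
    A real system would ground the question's entities in the image
    (e.g. with a vision-language model) and describe only relevant regions."""
    return f"an image region relevant to: {question}"

def retrieve_knowledge(question, caption):
    """Stage 2: dynamic retrieval of external knowledge, keyed on the
    question and the grounded caption (e.g. from a knowledge base or LLM)."""
    return ["external fact related to the question"]

def build_prompt(caption, knowledge, question):
    """Stage 3: structured prompt engineering - combine the caption and
    retrieved knowledge into a compact, context-aligned prompt."""
    facts = "\n".join(f"- {k}" for k in knowledge)
    return (f"Context: {caption}\n"
            f"Knowledge:\n{facts}\n"
            f"Question: {question}\nAnswer:")

def answer_with_llm(prompt):
    """Stage 4: a frozen, general-purpose LLM predicts the answer zero-shot;
    no end-to-end multimodal fine-tuning is involved."""
    return "stub answer"  # placeholder for an actual LLM call

def gc_kbvqa(image, question):
    caption = generate_grounded_caption(image, question)
    knowledge = retrieve_knowledge(question, caption)
    prompt = build_prompt(caption, knowledge, question)
    return answer_with_llm(prompt)
```

Because every stage only composes frozen, pre-trained components through prompts, swapping in a different captioner, retriever, or LLM requires no retraining, which is the source of the low deployment overhead the summary claims.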

📝 Abstract
Knowledge-Based Visual Question Answering (KB-VQA) methods focus on tasks that demand reasoning with information extending beyond the explicit content depicted in the image. Early methods relied on explicit knowledge bases to provide this auxiliary information. Recent approaches leverage Large Language Models (LLMs) as implicit knowledge sources. While KB-VQA methods have demonstrated promising results, their potential remains constrained, as the auxiliary text provided may not be relevant to the question context and may also include irrelevant information that misguides the answer predictor. We introduce a novel four-stage framework called Grounding Caption-Guided Knowledge-Based Visual Question Answering (GC-KBVQA), which enables LLMs to effectively perform zero-shot VQA tasks without the need for end-to-end multimodal training. Innovations include grounded, question-aware caption generation that moves beyond generic descriptions to provide compact yet detailed, context-rich information. This is combined with knowledge from external sources to create highly informative prompts for the LLM. GC-KBVQA can address a variety of VQA tasks and does not require task-specific fine-tuning, thus reducing both costs and deployment complexity by leveraging general-purpose, pre-trained LLMs. Comparison with competing KB-VQA methods shows significantly improved performance. Our code will be made public.
Problem

Research questions and friction points this paper is trying to address.

Improves relevance of auxiliary text in KB-VQA tasks
Reduces irrelevant information that misleads answer prediction
Enables zero-shot VQA without multimodal training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Four-stage framework for zero-shot VQA
Question-aware caption generation for context
External knowledge integration via LLMs