🤖 AI Summary
This work addresses the limitations of existing knowledge-based visual question answering (VQA) approaches, which often introduce noise during article- or paragraph-level knowledge filtering and incur high computational costs when leveraging multimodal large language models. To overcome these challenges, the authors propose a lightweight, question-oriented knowledge filtering framework that integrates a trainable Question-Focused Filter (QFF) with a Chunk-based Dynamic Multi-Article Selection (CDA) module. This design enables fine-grained, cross-article knowledge retrieval while maintaining low computational overhead. Experimental results demonstrate that the proposed method significantly improves answer accuracy, outperforming state-of-the-art models by 4.9% on the E-VQA dataset and by 3.8% on InfoSeek, all while preserving efficiency.
📝 Abstract
Knowledge-based Visual Question Answering (KB-VQA) aims to answer questions by integrating images with external knowledge. Effective knowledge filtering is crucial for improving accuracy. Typical filtering methods use similarity metrics to locate relevant sections within a single article, leading to information selection errors at both the article and intra-article levels. Although recent explorations of Multimodal Large Language Model (MLLM)-based filtering methods demonstrate superior semantic understanding and cross-article filtering capabilities, their high computational cost limits practical application. To address these issues, this paper proposes a question-focused filtering method. This approach performs question-focused, cross-article filtering, efficiently obtaining high-quality filtered knowledge while keeping computational costs comparable to typical methods. Specifically, we design a trainable Question-Focused Filter (QFF) and a Chunk-based Dynamic Multi-Article Selection (CDA) module, which collectively alleviate information selection errors at both the article and intra-article levels. Experiments show that our method outperforms current state-of-the-art models by 4.9% on E-VQA and 3.8% on InfoSeek, validating its effectiveness. The code is publicly available at: https://github.com/leaffeall/QKVQA.
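To make the chunk-level, cross-article selection idea concrete, here is a minimal sketch (not the authors' implementation). It assumes each article has already been split into chunks and that question and chunk embeddings are available as plain vectors; chunks from all articles are pooled and ranked together by similarity to the question, so the selected knowledge may span multiple articles. The function and variable names are illustrative only.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_chunks(question_vec, articles, top_k=2):
    """Score every chunk from every article against the question and keep
    the top-k chunks overall, regardless of source article (a toy version
    of cross-article, chunk-level selection)."""
    scored = []
    for art_id, chunks in articles.items():
        for idx, (chunk_vec, text) in enumerate(chunks):
            scored.append((cosine(question_vec, chunk_vec), art_id, idx, text))
    # Rank all chunks jointly; a per-article filter would instead rank
    # within one article only, reproducing the selection errors described above.
    scored.sort(key=lambda t: t[0], reverse=True)
    return scored[:top_k]

# Toy usage: two articles, three chunks; the best two chunks come
# from different articles.
question = [1.0, 0.0]
articles = {
    "article_A": [([1.0, 0.1], "a0"), ([0.0, 1.0], "a1")],
    "article_B": [([0.9, 0.2], "b0")],
}
top = select_chunks(question, articles, top_k=2)
print([t[3] for t in top])  # → ['a0', 'b0']
```

In a real system the cosine scorer would be replaced by a trained, question-conditioned module (the role the paper's QFF plays), and the number of retained chunks per question could be chosen dynamically rather than fixed at `top_k`.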