🤖 AI Summary
This work addresses the hallucination problem in visual question answering (VQA) models, which often generate responses inconsistent with the input image or factual knowledge due to overreliance on internal parameters. While conventional retrieval-augmented generation (RAG) methods attempt to mitigate this issue, they frequently introduce irrelevant or conflicting external information. To overcome these limitations, the paper proposes Multimodal Adaptive RAG (MMA-RAG), which, for the first time, leverages intermediate vision–language joint representations within the model to construct a dynamic decision classifier. This classifier adaptively triggers external retrieval based on the model’s internal confidence. By integrating reverse image retrieval with representation learning, MMA-RAG enables precise control over multimodal retrieval. Experiments on three VQA benchmarks demonstrate significant improvements in answer accuracy, and ablation studies confirm the critical role of internal representations in enabling adaptive retrieval decisions.
📝 Abstract
Visual Question Answering systems face reliability issues due to hallucinations, where models generate answers misaligned with visual input or factual knowledge. While Retrieval-Augmented Generation frameworks mitigate this issue by incorporating external knowledge, static retrieval often introduces irrelevant or conflicting content, particularly in visual RAG settings where visually similar but semantically incorrect evidence may be retrieved. To address this, we propose Multimodal Adaptive RAG (MMA-RAG), which dynamically assesses the model's confidence in its internal knowledge to decide whether to incorporate retrieved external information into the generation process. Central to MMA-RAG is a decision classifier trained through layer-wise analysis, which leverages joint internal visual and textual representations to guide the use of reverse image retrieval. Experiments show that the model achieves significant improvements in response performance on three VQA datasets, and ablation studies highlight the importance of internal representations in adaptive retrieval decisions. Overall, the results demonstrate that MMA-RAG effectively balances external knowledge utilization and inference robustness across diverse multimodal scenarios.
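To make the adaptive retrieval decision concrete, the sketch below illustrates the general idea of gating retrieval on a confidence classifier over an intermediate joint representation. This is not the authors' implementation; the probe is a toy logistic classifier, and all dimensions, weights, and thresholds are illustrative assumptions.

```python
# Illustrative sketch of confidence-gated retrieval (not the paper's code).
# A lightweight probe scores a pooled vision-language hidden state; external
# retrieval (e.g., reverse image retrieval) fires only when the predicted
# confidence in the model's internal knowledge falls below a threshold.
import math

def confidence_score(hidden_state, weights, bias):
    """Logistic probe over a pooled intermediate joint representation."""
    z = sum(h * w for h, w in zip(hidden_state, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # P(internal knowledge suffices)

def should_retrieve(hidden_state, weights, bias, threshold=0.5):
    """Trigger external retrieval when internal confidence is low."""
    return confidence_score(hidden_state, weights, bias) < threshold

# Toy usage: 4-dim "joint representations" with hand-set probe weights.
w, b = [0.8, 0.6, 0.1, 0.4], 0.0
h_confident = [2.0, 1.5, 0.0, 1.0]    # probe score ~0.95 -> answer directly
h_uncertain = [-1.0, -2.0, 0.5, -0.5] # probe score ~0.10 -> retrieve
print(should_retrieve(h_confident, w, b))  # False
print(should_retrieve(h_uncertain, w, b))  # True
```

In the paper's setting, the probe would be trained via the layer-wise analysis the abstract describes, rather than hand-set as here.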