🤖 AI Summary
This work addresses the challenge of effectively retrieving and integrating missing visual cues to support accurate reasoning in visual question answering. To this end, we propose the R3G framework, which synergistically optimizes reasoning and retrieval by generating a reasoning plan that guides a two-stage image retrieval process—comprising coarse filtering followed by fine-grained re-ranking. R3G employs a modular architecture that combines a multimodal large language model with a sufficiency-aware re-ranking mechanism to dynamically incorporate relevant visual evidence, thereby enhancing answer generation. Evaluated on the MRAG-Bench benchmark, R3G consistently improves performance across six multimodal large language models and nine sub-scenarios, achieving state-of-the-art overall accuracy.
📝 Abstract
Vision-centric retrieval for VQA requires retrieving images to supply missing visual cues and integrating them into the reasoning process. However, selecting the right images and integrating them effectively into the model's reasoning remains challenging. To address this challenge, we propose R3G, a modular Reasoning-Retrieval-Reranking framework. It first produces a brief reasoning plan that specifies the required visual cues, then adopts a two-stage strategy, with coarse retrieval followed by fine-grained reranking, to select evidence images. On MRAG-Bench, R3G improves accuracy across six MLLM backbones and nine sub-scenarios, achieving state-of-the-art overall performance. Ablations show that sufficiency-aware reranking and reasoning steps are complementary, helping the model both choose the right images and use them well. We release code and data at https://github.com/czh24/R3G.
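The two-stage retrieve-then-rerank idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the cosine-similarity coarse stage, and the `sufficiency_scores` array (a stand-in for an MLLM judging whether a candidate image supplies the visual cues named in the reasoning plan) are all assumptions for the sake of the example.

```python
import numpy as np

def coarse_retrieve(query_vec, image_vecs, top_k=5):
    """Stage 1 (hypothetical): rank candidate images by cosine
    similarity to the query embedding and keep the top_k."""
    q = query_vec / np.linalg.norm(query_vec)
    m = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
    sims = m @ q
    idx = np.argsort(-sims)[:top_k]
    return idx, sims[idx]

def rerank(candidates, sims, sufficiency_scores, alpha=0.5):
    """Stage 2 (hypothetical): blend the coarse similarity with a
    sufficiency score, i.e. how well each image covers the cues
    requested by the reasoning plan, then re-sort."""
    combined = alpha * sims + (1 - alpha) * sufficiency_scores[candidates]
    return candidates[np.argsort(-combined)]

# Toy usage: image 0 matches the query best on raw similarity,
# but image 2 is judged more "sufficient" and wins after reranking.
query = np.array([1.0, 0.0])
images = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
idx, sims = coarse_retrieve(query, images, top_k=2)
sufficiency = np.array([0.0, 0.0, 1.0])
ranked = rerank(idx, sims, sufficiency, alpha=0.5)
```

In R3G the reranking signal comes from the model itself rather than a precomputed score vector, but the control flow (filter broadly, then re-score a small candidate set) is the same shape.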