🤖 AI Summary
Existing multimodal retrieval models perform poorly on queries that require deep image-text reasoning, as they struggle to integrate visual and textual information effectively. To address this, this work proposes HIVE, a novel framework that introduces, for the first time, a hypothesis-driven iterative visual evidence retrieval mechanism. HIVE explicitly models the vision-language reasoning process through four stages: initial retrieval, compensatory query generation, secondary retrieval, and LLM-based verification with reranking. By combining large language models with multimodal and text-only retrievers, HIVE supports plug-and-play deployment. Evaluated on the MM-BRIGHT benchmark, HIVE achieves an nDCG@10 of 41.7, outperforming the best text-only and multimodal baselines by 9.5 and 14.1 points, respectively, with particularly pronounced gains in complex domains such as gaming and chemistry.
📝 Abstract
Multimodal retrieval models fail on reasoning-intensive queries where images (diagrams, charts, screenshots) must be deeply integrated with text to identify relevant documents -- the best multimodal model achieves only 27.6 nDCG@10 on MM-BRIGHT, underperforming even strong text-only retrievers (32.2). We introduce \textbf{HIVE} (\textbf{H}ypothesis-driven \textbf{I}terative \textbf{V}isual \textbf{E}vidence Retrieval), a plug-and-play framework that injects explicit visual-text reasoning into a retriever via LLMs. HIVE operates in four stages: (1) initial retrieval over the corpus, (2) LLM-based compensatory query synthesis that explicitly articulates visual and logical gaps observed in top-$k$ candidates, (3) secondary retrieval with the refined query, and (4) LLM verification and reranking over the union of candidates. Evaluated on the multimodal-to-text track of MM-BRIGHT (2,803 real-world queries across 29 technical domains), HIVE achieves a new state-of-the-art aggregated nDCG@10 of \textbf{41.7} -- a \textbf{+9.5} point gain over the best text-only model (DiVeR: 32.2) and \textbf{+14.1} over the best multimodal model (Nomic-Vision: 27.6), where our reasoning-enhanced base retriever contributes 33.2 and the HIVE framework adds a further \textbf{+8.5} points -- with particularly strong results in visually demanding domains (Gaming: 68.2, Chemistry: 42.5, Sustainability: 49.4). Compatible with both standard and reasoning-enhanced retrievers, HIVE demonstrates that LLM-mediated visual hypothesis generation and verification can substantially close the multimodal reasoning gap in retrieval. https://github.com/mm-bright/multimodal-reasoning-retrieval
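The four-stage loop can be sketched as a minimal, self-contained Python pipeline. This is an illustrative assumption, not the paper's implementation: the toy word-overlap retriever and the heuristic stand-ins for the LLM steps (`compensatory_query`, `verify_and_rerank`) only mirror the control flow of stages (1)-(4); HIVE's actual components are multimodal/text retrievers and LLM prompts not specified here.

```python
def retrieve(query, corpus, k=3):
    """Toy lexical retriever: rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def compensatory_query(query, candidates):
    """Stand-in for LLM compensatory query synthesis: articulate what the
    top-k candidates fail to cover (here, simply the uncovered query terms)."""
    covered = set()
    for d in candidates:
        covered |= set(d.lower().split())
    gap = [t for t in query.lower().split() if t not in covered]
    return query + " " + " ".join(gap) if gap else query

def verify_and_rerank(query, candidates, k=3):
    """Stand-in for LLM verification: dedupe the union of candidate lists,
    then rerank against the original query."""
    q = set(query.lower().split())
    unique = list(dict.fromkeys(candidates))  # preserves first-seen order
    return sorted(unique, key=lambda d: -len(q & set(d.lower().split())))[:k]

def hive(query, corpus, k=3):
    first = retrieve(query, corpus, k)               # (1) initial retrieval
    refined = compensatory_query(query, first)       # (2) compensatory query
    second = retrieve(refined, corpus, k)            # (3) secondary retrieval
    return verify_and_rerank(query, first + second, k)  # (4) verify + rerank

corpus = [
    "benzene ring aromatic chemistry diagram",
    "game screenshot puzzle level walkthrough",
    "solar panel sustainability report chart",
]
print(hive("aromatic ring diagram", corpus, k=2))
```

The key design point the sketch preserves is stage (4) operating over the *union* of first- and second-pass candidates, so a refined query can add evidence without discarding initially retrieved documents.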