DeepImageSearch: Benchmarking Multimodal Agents for Context-Aware Image Retrieval in Visual Histories

📅 2026-02-11
🤖 AI Summary
This work addresses a critical limitation of existing image retrieval methods, which often neglect cross-temporal contextual dependencies in visual streams and struggle to leverage implicit cues for target localization in real-world scenarios. To overcome this, the paper reframes image retrieval as an autonomous exploration task and introduces the first agent-based paradigm for context-aware retrieval. The proposed framework integrates a modular architecture, fine-grained tool invocation, a dual-memory mechanism, and a vision-language model to enable multi-step reasoning and planning. Furthermore, the authors construct DISBench, a new benchmark, and devise a human–model collaborative pipeline to generate contextually grounded queries at scale. Experimental results demonstrate that DISBench poses a significant challenge to current state-of-the-art models, underscoring the pivotal role of agent-based reasoning in next-generation retrieval systems.

📝 Abstract
Existing multimodal retrieval systems excel at semantic matching but implicitly assume that query-image relevance can be measured in isolation. This paradigm overlooks the rich dependencies inherent in realistic visual streams, where information is distributed across temporal sequences rather than confined to single snapshots. To bridge this gap, we introduce DeepImageSearch, a novel agentic paradigm that reformulates image retrieval as an autonomous exploration task. Models must plan and perform multi-step reasoning over raw visual histories to locate targets based on implicit contextual cues. We construct DISBench, a challenging benchmark built on interconnected visual data. To address the scalability challenge of creating context-dependent queries, we propose a human-model collaborative pipeline that employs vision-language models to mine latent spatiotemporal associations, effectively offloading intensive context discovery before human verification. Furthermore, we build a robust baseline using a modular agent framework equipped with fine-grained tools and a dual-memory system for long-horizon navigation. Extensive experiments demonstrate that DISBench poses significant challenges to state-of-the-art models, highlighting the necessity of incorporating agentic reasoning into next-generation retrieval systems.
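The abstract describes the baseline as a modular agent that pairs fine-grained tools with a dual-memory system for long-horizon navigation over visual histories. As a rough illustration of that idea, here is a minimal sketch of such a loop. Everything below (the `DualMemory` fields, the `search_history` function, the string-match relevance check standing in for a vision-language model) is an assumption for illustration, not the authors' implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a dual-memory exploration agent, NOT the paper's code.
# Working memory holds the short-horizon step trace; episodic memory holds
# confirmed findings that persist across the long-horizon search.

@dataclass
class DualMemory:
    working: list = field(default_factory=list)   # recent observations (short-term)
    episodic: dict = field(default_factory=dict)  # committed findings (long-term)

    def record(self, step: int, note: str) -> None:
        self.working.append((step, note))

    def commit(self, key: str, value: str) -> None:
        self.episodic[key] = value


def search_history(frames, query, memory, max_steps=10):
    """Multi-step exploration over a visual history: inspect frames in order,
    log each observation into working memory, and commit the match into
    episodic memory once the contextual cue is found."""
    for step, (frame_id, caption) in enumerate(frames):
        if step >= max_steps:
            break
        memory.record(step, f"inspected {frame_id}: {caption}")
        if query in caption:  # stand-in for VLM-based relevance reasoning
            memory.commit("target", frame_id)
            return frame_id
    return None
```

In a real agentic system the string-containment test would be replaced by a vision-language model judging whether a frame satisfies the implicit contextual cue, and the tool set would include fine-grained operations (zooming, cropping, temporal jumps) rather than a single linear scan.

```python
# Usage sketch on a toy "visual history" of (frame_id, caption) pairs:
frames = [
    ("img_001", "dog chasing a ball"),
    ("img_002", "a red car leaving the driveway"),
]
mem = DualMemory()
hit = search_history(frames, "red car", mem)
```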
Problem

Research questions and friction points this paper is trying to address.

context-aware image retrieval
visual histories
multimodal agents
temporal dependencies
image retrieval benchmark
Innovation

Methods, ideas, or system contributions that make the work stand out.

context-aware retrieval
multimodal agents
visual history
agentic reasoning
DISBench
Chenlong Deng
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Mengjie Deng
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Junjie Wu
Center for High Pressure Science & Technology Advanced Research
Physics
Dun Zeng
OPPO Research Institute
Machine Learning · Distributed Learning · Stochastic Optimization
Teng Wang
AI Researcher @ OPPO Research Institute
AI · LLM Reasoning · NLP
Qingsong Xie
OPPO Research Institute
Jiadeng Huang
OPPO Research Institute
Shengjie Ma
Renmin University of China, GSAI
Information Retrieval · RAG · LLM · Knowledge Graph · AI Cross-domain Application
Changwang Zhang
OPPO Research Institute
Zhaoxiang Wang
OPPO Research Institute
Jun Wang
Futurewei Technologies
Mobile Computing · Computer Architecture · Parallel and Distributed Simulation/Computing · Compiler
Yutao Zhu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China
Zhicheng Dou
Renmin University of China
Information Retrieval · Retrieval Augmented Generation · Large Language Models · Generative IR