🤖 AI Summary
This work addresses the high training costs, instability, and cold-start challenges inherent in existing approaches that rely on large-scale supervised trajectories or reinforcement learning. The authors propose a training-free, cross-modal model-merging paradigm that fuses a text-based search agent with a base vision-language model, endowing the VLM with autonomous search capabilities without requiring any additional multi-modal training data. The core innovation is a saliency-guided Optimal Brain Merging (OBM) algorithm, which mitigates parameter interference during fusion and is complemented by few-shot calibration and a multi-step reasoning framework. Evaluated on search-intensive benchmarks such as InfoSeek and MMSearch, the method establishes a strong zero-shot performance baseline and, when used as a warm-start strategy, significantly accelerates convergence and improves peak accuracy.
📝 Abstract
Recent advances in Vision-Language Models (VLMs) have motivated the development of multi-modal search agents that can actively invoke external search tools and integrate retrieved evidence through multi-step reasoning. While promising, existing approaches typically rely on large-scale supervised trajectories or expensive reinforcement learning (RL), leading to high training cost, instability, and a severe cold-start problem for standard VLMs. We propose a training-free paradigm to empower VLMs with autonomous search capabilities via cross-modal model merging. By fusing a text-based search agent with a base VLM, we show that multi-modal search capabilities can be effectively composed without any additional multi-modal training data. To mitigate parameter interference during cross-modal integration, we introduce Optimal Brain Merging (OBM), a saliency-aware merging algorithm that identifies task-critical parameters based on their impact on model loss using only a small set of calibration samples. Extensive experiments on search-intensive benchmarks (e.g., InfoSeek, MMSearch) reveal that: (1) Model merging secures a reasonable performance floor as a zero-shot agent, with OBM achieving superior search rates; (2) OBM significantly raises the performance ceiling as a warm-start strategy, achieving faster convergence and higher peak accuracy than standard VLM initialization.
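The abstract describes OBM only at a high level: saliency scores estimate each parameter's impact on the loss from a few calibration samples, and only task-critical parameters are transferred during merging. The following is a minimal sketch of that idea under stated assumptions, not the paper's actual algorithm: saliency is approximated first-order as |gradient × task-vector entry|, and the toy parameter vectors and gradients stand in for real model weights and calibration backprops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): flattened
# parameters of a base VLM and of a text-based search agent.
theta_base = rng.normal(size=8)
theta_agent = theta_base + rng.normal(scale=0.5, size=8)
delta = theta_agent - theta_base  # task vector of the search agent

# Hypothetical per-parameter gradients of the agent's loss on a small
# calibration set (in practice, obtained by backprop over those samples).
grads = rng.normal(size=8)

# First-order saliency: estimated loss change if a delta entry is dropped.
saliency = np.abs(grads * delta)

# Transfer only the top-k most salient deltas; all other parameters keep
# their base-VLM values, limiting cross-modal parameter interference.
k = 3
keep = np.argsort(saliency)[-k:]
mask = np.zeros_like(delta)
mask[keep] = 1.0
theta_merged = theta_base + mask * delta
```

In this sketch the merged model differs from the base VLM in exactly the k highest-saliency positions; a real implementation would compute saliency per layer or per block and may use a second-order (Optimal Brain Surgeon-style) loss estimate rather than this first-order proxy.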