🤖 AI Summary
How can large language models (LLMs) be endowed with multimodal perception and reasoning capabilities—without training large-scale vision-language models (VLMs)—while preserving their inherent textual knowledge and reasoning strengths?
Method: We propose BeMyEyes, a modular multi-agent framework that decouples perception and reasoning via a collaborative dialogue between a lightweight perception agent (a small VLM) and a pure-text reasoning agent (an LLM), enabled by cross-modal alignment through data synthesis and supervised fine-tuning.
Contribution/Results: Our approach avoids end-to-end training of monolithic multimodal foundation models, enabling flexible extension to new modalities and domains. Experiments demonstrate that the combination of DeepSeek-R1 (text-only) and Qwen2.5-VL-7B (vision-language) outperforms strong closed-source baselines—including GPT-4o—across diverse multimodal benchmarks, validating the efficacy of modular, agent-based multimodal reasoning.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in challenging, knowledge-intensive reasoning tasks. However, extending LLMs to perceive and reason over a new modality (e.g., vision) often requires costly development of large-scale vision-language models (VLMs) with LLMs as backbones. Smaller VLMs are more efficient and adaptable but often lack the broad knowledge and reasoning capabilities of frontier LLMs. In this work, we propose BeMyEyes, a modular, multi-agent framework for extending LLMs to multimodal reasoning by orchestrating collaboration, through conversations, between efficient, adaptable VLMs acting as perceivers and powerful LLMs acting as reasoners. We then introduce a data synthesis and supervised fine-tuning pipeline to train the perceiver agent to collaborate effectively with the reasoner agent. By combining the complementary strengths of perception and reasoning agents, BeMyEyes avoids the need to train large-scale multimodal models, preserves the generalization and reasoning capabilities of LLMs, and allows flexible extension to new domains and modalities. Experiments show that our framework unlocks multimodal reasoning capabilities for LLMs, enabling a lightweight and fully open-source solution, i.e., equipping text-only DeepSeek-R1 with a Qwen2.5-VL-7B perceiver, to outperform large-scale proprietary VLMs such as GPT-4o on a wide range of knowledge-intensive multimodal tasks. These results demonstrate the effectiveness, modularity, and scalability of our multi-agent approach for building future multimodal reasoning systems.
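To make the perceiver–reasoner collaboration concrete, here is a minimal sketch of such a dialogue loop. This is an illustrative reconstruction, not the paper's actual implementation: the function names (`run_dialogue`, `toy_perceiver`, `toy_reasoner`) and the `ANSWER:` turn-taking convention are assumptions; in a real system the perceiver would call a small VLM (e.g., Qwen2.5-VL-7B) and the reasoner a text-only LLM (e.g., DeepSeek-R1).

```python
# Hypothetical sketch of a BeMyEyes-style perceiver-reasoner dialogue.
# The text-only reasoner never sees the image; it asks the perceiver
# visual questions until it can commit to a final answer.

def run_dialogue(image, question, perceiver, reasoner, max_turns=3):
    """Run a multi-turn conversation between a reasoner and a perceiver."""
    transcript = [f"Question: {question}"]
    for _ in range(max_turns):
        reply = reasoner("\n".join(transcript))
        if reply.startswith("ANSWER:"):
            # The reasoner has gathered enough visual evidence.
            return reply[len("ANSWER:"):].strip()
        # Otherwise the reply is a visual query; the perceiver grounds
        # it in the image and reports back in text.
        observation = perceiver(image, reply)
        transcript.append(f"Reasoner asks: {reply}")
        transcript.append(f"Perceiver says: {observation}")
    return "ANSWER unavailable"

# Toy stand-ins for the two agents, for demonstration only.
def toy_perceiver(image, query):
    # A real perceiver would run a small VLM on the image and query.
    return image.get("caption", "nothing visible")

def toy_reasoner(transcript):
    # A real reasoner would be a frontier text-only LLM.
    if "Perceiver says:" in transcript:
        return "ANSWER: an apple"
    return "What objects are in the image?"

result = run_dialogue({"caption": "a red apple on a table"},
                      "What fruit is shown?", toy_perceiver, toy_reasoner)
print(result)  # → an apple
```

The key design point this sketch illustrates is the decoupling: the reasoner's weights are untouched and modality-agnostic, so swapping in a new perceiver (or a new modality) only changes what `perceiver` returns, not the reasoning agent itself.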