Be My Eyes: Extending Large Language Models to New Modalities Through Multi-Agent Collaboration

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
How can large language models (LLMs) be endowed with multimodal perception and reasoning capabilities—without training large-scale vision-language models (VLMs)—while preserving their inherent textual knowledge and reasoning strengths? Method: We propose BeMyEyes, a modular multi-agent framework that decouples perception and reasoning via a collaborative dialogue between a lightweight perception agent (a small VLM) and a pure-text reasoning agent (an LLM), enabled by cross-modal alignment through data synthesis and supervised fine-tuning. Contribution/Results: Our approach avoids end-to-end training of monolithic multimodal foundation models, enabling flexible extension to new modalities and domains. Experiments demonstrate that the combination of DeepSeek-R1 (text-only) and Qwen2.5-VL-7B (vision-language) outperforms strong closed-source baselines—including GPT-4o—across diverse multimodal benchmarks, validating the efficacy of modular, agent-based multimodal reasoning.

📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in challenging, knowledge-intensive reasoning tasks. However, extending LLMs to perceive and reason over a new modality (e.g., vision) often requires costly development of large-scale vision-language models (VLMs) with LLMs as backbones. Smaller VLMs are more efficient and adaptable but often lack the broad knowledge and reasoning capabilities of frontier LLMs. In this work, we propose BeMyEyes, a modular, multi-agent framework for extending LLMs to multimodal reasoning by orchestrating collaboration between efficient, adaptable VLMs as perceivers and powerful LLMs as reasoners through conversations. We then introduce a data synthesis and supervised fine-tuning pipeline to train the perceiver agent to collaborate effectively with the reasoner agent. By combining the complementary strengths of perception and reasoning agents, BeMyEyes avoids the need for training large-scale multimodal models, preserves the generalization and reasoning capabilities of LLMs, and allows flexible extension to new domains and modalities. Experiments show that our framework unlocks multimodal reasoning capabilities for LLMs, enabling a lightweight and fully open-source solution, i.e., equipping text-only DeepSeek-R1 with a Qwen2.5-VL-7B perceiver, to outperform large-scale proprietary VLMs such as GPT-4o on a wide range of knowledge-intensive multimodal tasks. These results demonstrate the effectiveness, modularity, and scalability of our multi-agent approach for building future multimodal reasoning systems.
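The abstract mentions a data synthesis and supervised fine-tuning pipeline for training the perceiver to collaborate with the reasoner. As a rough illustration only, a synthesized dialogue turn might be packaged into a chat-format SFT record along the following lines; the field names, system prompt, and record layout here are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: turning one synthesized perceiver-reasoner
# exchange into a chat-format SFT record for the perceiver (small VLM).
# All field names and the prompt wording are illustrative assumptions.

def synthesize_sft_example(image_ref, reasoner_question, grounded_answer):
    """Package one dialogue turn as a supervised fine-tuning record."""
    return {
        "messages": [
            {"role": "system",
             "content": "You are a perception agent. Answer visual "
                        "questions about the image concisely."},
            {"role": "user", "content": reasoner_question},
            {"role": "assistant", "content": grounded_answer},
        ],
        # Placeholder for the actual image attached during training.
        "image": image_ref,
    }

record = synthesize_sft_example(
    image_ref="chart_001.png",
    reasoner_question="What is the y-axis label?",
    grounded_answer="Revenue (USD millions)",
)
print(record["messages"][-1]["content"])  # -> Revenue (USD millions)
```

The target output is the perceiver's grounded answer, so fine-tuning on many such records teaches the small VLM to respond usefully to the kinds of questions a text-only reasoner asks.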
Problem

Research questions and friction points this paper is trying to address.

Extending LLMs to perceive new modalities without costly multimodal training
Enabling lightweight VLMs to collaborate with powerful LLMs for reasoning
Preserving LLM reasoning capabilities while adding multimodal perception
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent collaboration between perceivers and reasoners
Data synthesis and fine-tuning pipeline for agent training
Modular framework avoiding large-scale multimodal model training
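The multi-agent collaboration above can be sketched as a dialogue loop in which a text-only reasoner interrogates a perceiver that alone sees the image. The sketch below is a minimal toy with stubbed agents; the class and method names are hypothetical, and real implementations would back `PerceiverAgent` with a small VLM (e.g., Qwen2.5-VL-7B) and `ReasonerAgent` with a text-only LLM (e.g., DeepSeek-R1).

```python
# Toy sketch of the perceiver-reasoner dialogue protocol.
# Both agents are stubs with canned behavior; names are hypothetical.

class PerceiverAgent:
    """Sees the image; answers the reasoner's visual questions in text."""
    def __init__(self, image):
        self.image = image  # in practice: pixels; here: a dict of facts

    def describe(self):
        return f"The image shows {self.image['scene']}."

    def answer(self, question):
        # A real VLM would ground the question in the image content.
        key = "count" if "how many" in question.lower() else "scene"
        return str(self.image.get(key, "I cannot tell from the image."))


class ReasonerAgent:
    """Text-only: never sees the image, only the perceiver's messages."""
    def solve(self, task, perceiver, max_turns=3):
        transcript = [("perceiver", perceiver.describe())]
        for _ in range(max_turns):
            question = self._next_question(task, transcript)
            if question is None:  # enough visual evidence gathered
                break
            transcript.append(("reasoner", question))
            transcript.append(("perceiver", perceiver.answer(question)))
        return self._conclude(transcript)

    def _next_question(self, task, transcript):
        # Stub policy: ask one follow-up question, then stop.
        asked = [m for role, m in transcript if role == "reasoner"]
        return None if asked else "How many objects are there?"

    def _conclude(self, transcript):
        # Toy: treat the final perceiver reply as the answer.
        return transcript[-1][1]


image = {"scene": "three apples on a table", "count": "3"}
answer = ReasonerAgent().solve("Count the apples.", PerceiverAgent(image))
print(answer)  # -> 3
```

The point of the decoupling is visible even in this toy: the reasoner's logic is pure text, so swapping in a different perceiver (a new modality or domain) requires no retraining of the reasoner.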