Empowering Multimodal LLMs with External Tools: A Comprehensive Survey

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) are held back by low-quality multimodal data, poor generalization on complex tasks, and inadequate evaluation frameworks, all of which hinder their reliable deployment. This paper presents the first systematic survey of tool-augmented MLLMs, organized along four dimensions: data construction, task enhancement, evaluation methodology, and future challenges. It describes a unified collaborative framework integrating multimodal encoders, large language models, and external tools, including APIs, domain-specific expert models, and knowledge bases, to improve cross-modal understanding and reasoning. The authors also maintain an open-source repository cataloguing diverse tool-augmentation strategies. The work provides both theoretical foundations and practical guidelines for high-fidelity multimodal data generation, robust problem solving in complex scenarios, and trustworthy evaluation of MLLMs, advancing multimodal AI toward greater reliability, interpretability, and scalability.

📝 Abstract
By integrating the perception capabilities of multimodal encoders with the generative power of Large Language Models (LLMs), Multimodal Large Language Models (MLLMs), exemplified by GPT-4V, have achieved great success in various multimodal tasks, pointing toward a promising pathway to artificial general intelligence. Despite this progress, the limited quality of multimodal data, poor performance on many complex downstream tasks, and inadequate evaluation protocols continue to hinder the reliability and broader applicability of MLLMs across diverse domains. Inspired by the human ability to leverage external tools for enhanced reasoning and problem-solving, augmenting MLLMs with external tools (e.g., APIs, expert models, and knowledge bases) offers a promising strategy to overcome these challenges. In this paper, we present a comprehensive survey on leveraging external tools to enhance MLLM performance. Our discussion is structured along four key dimensions of external tools: (1) how they can facilitate the acquisition and annotation of high-quality multimodal data; (2) how they can assist in improving MLLM performance on challenging downstream tasks; (3) how they enable comprehensive and accurate evaluation of MLLMs; (4) the current limitations and future directions of tool-augmented MLLMs. Through this survey, we aim to underscore the transformative potential of external tools in advancing MLLM capabilities, offering a forward-looking perspective on their development and applications. The project page of this paper is publicly available at https://github.com/Lackel/Awesome-Tools-for-MLLMs.
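The abstract's core architectural idea, an MLLM whose context is augmented with outputs from external tools such as APIs, expert models, and knowledge bases, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions: the tool names, the `run_with_tools` helper, and the placeholder tool implementations are all hypothetical and are not APIs from the surveyed paper.

```python
# Minimal sketch of a tool-augmented MLLM loop. All names here
# (ocr_tool, kb_lookup, run_with_tools) are hypothetical placeholders
# standing in for real expert models and knowledge bases.
from typing import Callable, Dict, List, Tuple

def ocr_tool(image_path: str) -> str:
    # Placeholder for a domain-specific expert model (e.g., an OCR engine).
    return f"[text extracted from {image_path}]"

def kb_lookup(entity: str) -> str:
    # Placeholder for a knowledge-base query.
    return f"[facts about {entity}]"

TOOLS: Dict[str, Callable[[str], str]] = {
    "ocr": ocr_tool,
    "kb": kb_lookup,
}

def run_with_tools(request: str, tool_calls: List[Tuple[str, str]]) -> str:
    """Augment the model's input with tool outputs before answering."""
    context = [request]
    for name, arg in tool_calls:
        if name in TOOLS:
            context.append(f"{name} -> {TOOLS[name](arg)}")
    # A real system would feed `context` back to the MLLM for a final
    # grounded answer; here we simply join it to show the augmented prompt.
    return " | ".join(context)

print(run_with_tools("What does the sign say?", [("ocr", "sign.jpg")]))
```

In a full system, the MLLM itself would decide which tools to invoke (tool selection), and the loop would iterate until the model produces a final answer grounded in the tool outputs.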
Problem

Research questions and friction points this paper is trying to address.

Enhancing MLLMs with external tools for better performance
Improving multimodal data quality and annotation processes
Addressing evaluation challenges in MLLM reliability and applicability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrate multimodal encoders with LLMs
Augment MLLMs using external tools
Survey tool-enhanced MLLM performance
Wenbin An
Xi'an Jiaotong University
Transfer Learning, Weakly Supervised Learning, Natural Language Processing
Jiahao Nie
Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore, 639798, Singapore
Yaqiang Wu
Lenovo
Feng Tian
School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an, 710049, China
Shijian Lu
College of Computing and Data Science, NTU
Image and Video Analytics, Computer Vision, Machine Learning
Qinghua Zheng
Xi'an Jiaotong University
AI