MLLM-Tool: A Multimodal Large Language Model for Tool Agent Learning

📅 2024-01-19
🏛️ IEEE Workshop/Winter Conference on Applications of Computer Vision
📈 Citations: 14
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models (LLMs) rely solely on textual instructions for tool invocation, limiting their ability to accurately infer users' underlying intentions, particularly under modality ambiguity and when multiple functionally equivalent tools are available. To address this, the authors propose MLLM-Tool, the first multimodal LLM explicitly designed for tool calling, integrating vision and audio encoders into a tool agent to enable cross-modal intent understanding and robust tool matching. They introduce the first multimodal tool-instruction dataset featuring multiple candidate tools per query, and adopt a joint training strategy combining instruction tuning with multimodal alignment. Experiments demonstrate significant improvements in tool recommendation accuracy, effective resolution of ambiguous queries, and support for multi-solution recommendations among functionally equivalent tools. The code and dataset are publicly released.

📝 Abstract
Recently, the astonishing performance of large language models (LLMs) in natural language comprehension and generation tasks has triggered extensive exploration of using them as central controllers to build agent systems. Multiple studies focus on bridging LLMs to external tools to extend the application scenarios. However, current LLMs' ability to perceive tool use is limited to a single text query, which may result in ambiguity in understanding the users' real intentions. LLMs are expected to eliminate that by perceiving the information in visual- or auditory-grounded instructions. Therefore, in this paper, we propose MLLM-Tool, a system incorporating open-source LLMs and multi-modal encoders so that the learned LLMs can be conscious of multi-modal input instructions and then select the function-matched tool correctly. To facilitate the evaluation of the model's capability, we collect a dataset featuring multi-modal input tools from HuggingFace. Another essential feature of our dataset is that it also contains multiple potential choices for the same instruction due to the existence of identical functions and synonymous functions, which provides more potential solutions for the same query. The experiments reveal that our MLLM-Tool is capable of recommending appropriate tools for multi-modal instructions. Codes and data are available at github.com/MLLM-Tool/MLLM-Tool.
Problem

Research questions and friction points this paper is trying to address.

Enhance LLMs' tool perception beyond text queries
Enable multi-modal input for accurate tool selection
Address ambiguity in user intentions with visual/auditory data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLM integrates visual and auditory inputs
Open-source LLMs combined with multi-modal encoders
Dataset includes identical and synonymous tool functions
👥 Authors

Chenyu Wang — ShanghaiTech University
Weixin Luo — Meituan
Qianyu Chen — ShanghaiTech University
Haonan Mai — ShanghaiTech University
Jindi Guo — ShanghaiTech University
Sixun Dong — Arizona State University (Computer Vision, Multimodal Learning, Visual Language Model)
Xiaohua Xuan — UniDT
Zhengxin Li — ShanghaiTech University (Computer Vision, Machine Learning)
Lin Ma
Shenghua Gao — The University of Hong Kong (Computer Vision, Pattern Recognition, Machine Learning)