Multi-Mission Tool Bench: Assessing the Robustness of LLM based Agents through Related and Dynamic Missions

📅 2025-04-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM agent benchmarks predominantly focus on static, single-task scenarios, failing to adequately assess agent robustness in realistic, dynamic, multi-task environments. Method: We introduce the first benchmark for evaluating LLM agents’ tool-use capabilities under multi-task dynamic evolution. It features an innovatively designed test suite comprising interdependent, switchable multi-mission scenarios. Our approach establishes a novel multi-mission collaborative evaluation paradigm, integrating task-relation graph modeling, dynamic decision-tree assessment, tool-call chain tracing, and joint efficiency–accuracy metrics; we further propose a multi-agent collaborative data generation framework. Results: Extensive experiments across more than ten open- and closed-weight LLMs demonstrate that the benchmark effectively identifies three core determinants of robustness: contextual transfer capability, state persistence mechanisms, and tool-memory consistency.

📝 Abstract
Large language models (LLMs) demonstrate strong potential as agents for tool invocation due to their advanced comprehension and planning capabilities. Users increasingly rely on LLM-based agents to solve complex missions through iterative interactions. However, existing benchmarks predominantly assess agents in single-mission scenarios, failing to capture real-world complexity. To bridge this gap, we propose the Multi-Mission Tool Bench. In the benchmark, each test case comprises multiple interrelated missions. This design requires agents to dynamically adapt to evolving demands. Moreover, the proposed benchmark explores all possible mission-switching patterns within a fixed number of missions. Specifically, we propose a multi-agent data generation framework to construct the benchmark. We also propose a novel method to evaluate the accuracy and efficiency of agent decisions with dynamic decision trees. Experiments on diverse open-source and closed-source LLMs reveal critical factors influencing agent robustness and provide actionable insights to the tool invocation community.
Problem

Research questions and friction points this paper is trying to address.

Assessing LLM agent robustness in multi-mission scenarios
Evaluating dynamic adaptation to interrelated mission demands
Developing benchmarks for real-world tool invocation complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-mission benchmark for dynamic agent evaluation
Multi-agent data generation framework construction
Dynamic decision trees for evaluating decision accuracy and efficiency
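To illustrate the decision-tree evaluation idea, here is a minimal sketch: each test case's valid tool-call sequences form a tree, and an agent's trace is scored for accuracy (how far it follows a valid path) and efficiency (shortest valid path length relative to the trace length). The `Node` structure, function names, and exact metric definitions are illustrative assumptions, not the benchmark's actual implementation.

```python
class Node:
    """One step in the tree of valid tool-call sequences."""
    def __init__(self, tool=None, terminal=False, children=()):
        self.tool = tool            # tool name expected at this step
        self.terminal = terminal    # True if a mission can end here
        self.children = list(children)

def shortest_path(node):
    """Fewest calls needed to reach a terminal node from here."""
    if node.terminal:
        return 0
    return 1 + min(shortest_path(c) for c in node.children)

def score_trace(root, trace):
    """Score an agent's tool-call trace against the decision tree.

    accuracy   = fraction of the trace that follows some valid path
    efficiency = shortest valid path length / trace length
                 (1.0 = optimal; 0.0 if no valid path was completed)
    """
    node = root  # dummy root whose children are the valid first calls
    for i, call in enumerate(trace):
        nxt = next((c for c in node.children if c.tool == call), None)
        if nxt is None:
            return i / len(trace), 0.0   # diverged from every valid path
        node = nxt
    if not node.terminal:
        # Matched so far, but the mission is unfinished: penalize by the
        # remaining steps still needed to reach a terminal node.
        return len(trace) / (len(trace) + shortest_path(node)), 0.0
    return 1.0, shortest_path(root) / len(trace)

# Toy tree with two valid paths: search -> book (optimal)
# and search -> compare -> book.
tree = Node(children=[
    Node("search", children=[
        Node("book", terminal=True),
        Node("compare", children=[Node("book", terminal=True)]),
    ]),
])
```

With this toy tree, `score_trace(tree, ["search", "book"])` yields perfect accuracy and efficiency, the longer valid path is penalized only on efficiency, and a trace that calls an invalid tool is penalized on both.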
PeiJie Yu
Tencent HunYuan
Yifan Yang
Tencent HunYuan
Jinjian Li
Tencent HunYuan
Zelong Zhang
Tencent HunYuan
Haorui Wang
PhD student, Gatech
Machine Learning, Large Language Models, Decision Making, Uncertainty Quantification
Xiao Feng
Tencent HunYuan
Feng Zhang
Tencent HunYuan