Rethinking Stateful Tool Use in Multi-Turn Dialogues: Benchmarks and Challenges

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing tool-use evaluation benchmarks predominantly focus on single-turn, stateless scenarios, neglecting the dynamic state evolution and tool lifecycle dependencies inherent in multi-turn dialogues. This work introduces DialogTool, the first benchmark explicitly designed for stateful, multi-turn tool use, alongside VirtualMobile, a configurable virtual mobile environment. DialogTool systematically models the full tool lifecycle across three stages: tool creation; tool utilization (awareness, selection, and execution); and role-consistent response generation. Its innovations include multi-turn state-tracking annotations, an API-simulation execution environment, and a six-task, stage-wise evaluation protocol. A comprehensive cross-model evaluation of 13 state-of-the-art LLMs reveals significant performance degradation in long-horizon, state-sensitive tool use: all models exhibit consistent accuracy decay across dialogue turns, exposing fundamental limitations in state representation and long-range dependency modeling.
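To make the stage-wise, state-sensitive evaluation concrete, here is a minimal sketch of how a turn-by-turn scoring loop over stateful tool calls might be structured. All interfaces here (Turn, DialogState, and the model and simulator callables) are hypothetical illustrations, not the authors' released code.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    user_utterance: str
    gold_call: str  # annotated reference tool call, e.g. "set_alarm(time='07:00')"

@dataclass
class DialogState:
    # Accumulated context: (utterance, predicted call, execution result) triples.
    history: list = field(default_factory=list)

def evaluate_dialogue(model, dialogue, simulator):
    """Score one multi-turn dialogue turn by turn.

    `model(history, utterance)` predicts the next tool call and
    `simulator(call)` executes it against a virtual environment;
    both are assumed interfaces for this sketch.
    """
    state = DialogState()
    per_turn_correct = []
    for turn in dialogue:
        predicted = model(state.history, turn.user_utterance)
        per_turn_correct.append(predicted == turn.gold_call)
        result = simulator(predicted)  # stateful execution: may mutate the environment
        state.history.append((turn.user_utterance, predicted, result))
    # Averaging per_turn_correct by turn index across dialogues would surface
    # the accuracy decay over turns reported in the summary.
    return per_turn_correct
```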

📝 Abstract
Existing benchmarks that assess Language Models (LMs) as Language Agents (LAs) for tool use primarily focus on stateless, single-turn interactions or partial evaluations, such as tool selection in a single turn, overlooking the inherently stateful nature of interactions in multi-turn applications. To fill this gap, we propose DialogTool, a multi-turn dialogue dataset with stateful tool interactions that covers the whole life cycle of tool use, across six key tasks in three stages: 1) tool creation; 2) tool utilization: tool awareness, tool selection, tool execution; and 3) role-consistent response: response generation and role play. Furthermore, we build VirtualMobile, an embodied virtual mobile evaluation environment that simulates API calls and assesses the robustness of the created APIs. (We use "tools" and "APIs" interchangeably; there is no significant difference between them in this paper.) Taking advantage of these artifacts, we conduct a comprehensive evaluation of 13 distinct open- and closed-source LLMs and provide detailed analysis at each stage, revealing that existing state-of-the-art LLMs still cannot use tools well over long horizons.
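The three-stage, six-task structure above maps directly onto a small taxonomy. The sketch below encodes it as plain data; the stage and task identifiers are our own shorthand for the names enumerated in the abstract.

```python
# DialogTool's six tasks grouped by lifecycle stage, as enumerated in the
# abstract. Identifiers are shorthand of our own choosing.
TOOL_LIFECYCLE = {
    "tool_creation": ["tool_creation"],
    "tool_utilization": ["tool_awareness", "tool_selection", "tool_execution"],
    "role_consistent_response": ["response_generation", "role_play"],
}

assert sum(len(tasks) for tasks in TOOL_LIFECYCLE.values()) == 6
```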
Problem

Research questions and friction points this paper is trying to address.

Assessing stateful tool use in multi-turn dialogues
Evaluating whole tool lifecycle across six key tasks
Testing LLM robustness in long-horizon tool interactions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes DialogTool for stateful multi-turn tool interactions
Introduces VirtualMobile for API simulation and evaluation (see the sketch after this list)
Evaluates 13 LLMs across tool lifecycle stages
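As a rough illustration of what an API-simulation environment in the spirit of VirtualMobile could look like, the sketch below registers a tool against mutable device state and checks robustness to malformed calls. The class and function names (VirtualPhone, register, call, set_alarm) are assumptions for illustration, not the paper's actual implementation.

```python
class VirtualPhone:
    """Holds mutable device state that simulated API calls read and write."""

    def __init__(self):
        self.state = {"alarms": [], "volume": 5}
        self.apis = {}

    def register(self, name, fn):
        # fn takes the device state plus the call's keyword arguments.
        self.apis[name] = fn

    def call(self, name, **kwargs):
        if name not in self.apis:
            return {"ok": False, "error": f"unknown API: {name}"}
        try:
            return {"ok": True, "result": self.apis[name](self.state, **kwargs)}
        except TypeError as e:  # malformed arguments produced by the model
            return {"ok": False, "error": str(e)}

phone = VirtualPhone()
phone.register("set_alarm",
               lambda state, time: state["alarms"].append(time) or state["alarms"])
print(phone.call("set_alarm", time="07:00"))  # {'ok': True, 'result': ['07:00']}
print(phone.call("set_alarm", when="07:00"))  # caught: unexpected keyword 'when'
```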
👥 Authors

Hongru Wang
The Chinese University of Hong Kong

Wenyu Huang
The University of Edinburgh

Yufei Wang
Macquarie University

Yuanhao Xi
University of Goettingen
Large Language Model

Jianqiao Lu
Researcher at Seed Foundation Model team
Large Language Model, Online Matching Theory

Huan Zhang
Université de Montréal & MILA

Nan Hu
The University of Edinburgh

Zeming Liu
Beihang University

Jeff Z. Pan
Professor of Knowledge Computing, University of Edinburgh
Artificial Intelligence, Knowledge Representation and Reasoning, Knowledge Based Learning

Kam-Fai Wong
The Chinese University of Hong Kong, MoE Key Laboratory of High Confidence Software Technologies