Multi-Faceted Evaluation of Tool-Augmented Dialogue Systems

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Evaluating tool-augmented dialogue systems remains challenging: existing metrics—such as user satisfaction or single-step tool-call accuracy—fail to detect latent errors where agents misinterpret tool outputs yet receive superficially “satisfactory” user feedback across multi-turn interactions. To address this, we introduce TRACE, the first synthetic benchmark specifically designed to model diverse error propagation pathways in such systems. We further propose SCOPE, an automated evaluation framework that integrates causal reasoning–driven error pattern mining with multi-granularity evaluation rule generation. Its core innovation lies in the first systematic formalization of the tripartite error propagation mechanism among tools, agents, and users—enabling precise identification of “superficially satisfactory but fundamentally erroneous” interactions. Experiments demonstrate that SCOPE achieves significantly higher coverage of latent errors compared to baseline methods, establishing a new standard for robust evaluation of tool-augmented dialogue systems.
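The paper does not release code for the tripartite error propagation mechanism, but the core idea of a "superficially satisfactory but fundamentally erroneous" turn can be pictured with a minimal sketch. All names here (`DialogueTurn`, `find_latent_errors`) are invented for illustration, not part of TRACE or SCOPE:

```python
from dataclasses import dataclass

@dataclass
class DialogueTurn:
    tool_output: str      # ground-truth value returned by the tool
    agent_claim: str      # what the agent reported to the user
    user_satisfied: bool  # surface-level satisfaction signal

def find_latent_errors(dialogue):
    """Flag turns where the agent misreports the tool output
    even though the user signalled satisfaction."""
    return [
        i for i, turn in enumerate(dialogue)
        if turn.user_satisfied and turn.agent_claim != turn.tool_output
    ]

dialogue = [
    DialogueTurn("42.0 USD", "42.0 USD", True),              # faithful relay
    DialogueTurn("flight delayed", "flight on time", True),  # latent error
]
print(find_latent_errors(dialogue))  # -> [1]
```

A satisfaction-only metric would score both turns as successes; the second turn is exactly the failure mode the summary describes.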

📝 Abstract
Evaluating conversational AI systems that use external tools is challenging, as errors can arise from complex interactions among user, agent, and tools. While existing evaluation methods assess either user satisfaction or agents' tool-calling capabilities, they fail to capture critical errors in multi-turn tool-augmented dialogues, such as when agents misinterpret tool results yet appear satisfactory to users. We introduce TRACE, a benchmark of systematically synthesized tool-augmented conversations covering diverse error cases, and SCOPE, an evaluation framework that automatically discovers diverse error patterns and evaluation rubrics in tool-augmented dialogues. Experiments show SCOPE significantly outperforms the baseline, particularly on challenging cases where user satisfaction signals are misleading.
Problem

Research questions and friction points this paper is trying to address.

Evaluating complex interactions in tool-augmented dialogue systems
Identifying critical errors masked by user satisfaction signals
Automating discovery of diverse error patterns in multi-turn dialogues
Innovation

Methods, ideas, or system contributions that make the work stand out.

TRACE benchmark with synthesized tool-augmented conversations
SCOPE framework automatically discovers dialogue error patterns
System outperforms baselines on misleading satisfaction cases
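SCOPE's multi-granularity evaluation rules are only described at a high level in the summary. One plausible reading, sketched below with entirely hypothetical rule and function names, is that some rules score individual turns while others score the dialogue as a whole:

```python
# Hypothetical rubric checks at two granularities (not SCOPE's actual rules).

def turn_rule_no_hallucinated_fields(turn):
    # Turn-level: every field the agent cites must exist in the tool output.
    return all(field in turn["tool_output"] for field in turn["cited_fields"])

def dialogue_rule_goal_completed(turns):
    # Dialogue-level: the user's stated goal must be resolved by the final turn.
    return turns[-1]["goal_resolved"]

def evaluate(turns):
    turn_scores = [turn_rule_no_hallucinated_fields(t) for t in turns]
    return {
        "turn_pass_rate": sum(turn_scores) / len(turn_scores),
        "dialogue_pass": dialogue_rule_goal_completed(turns),
    }

turns = [
    {"tool_output": {"price": 42}, "cited_fields": ["price"], "goal_resolved": False},
    {"tool_output": {"price": 42}, "cited_fields": ["discount"], "goal_resolved": True},
]
print(evaluate(turns))  # -> {'turn_pass_rate': 0.5, 'dialogue_pass': True}
```

The combination matters: the dialogue-level rule alone would pass this conversation, while the turn-level rule catches the second turn citing a field the tool never returned.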