🤖 AI Summary
Evaluating tool-augmented dialogue systems remains challenging: existing metrics, such as user satisfaction or single-step tool-call accuracy, fail to detect latent errors in which agents misinterpret tool outputs yet receive superficially “satisfactory” user feedback across multi-turn interactions. To address this, we introduce TRACE, the first synthetic benchmark specifically designed to model diverse error propagation pathways in such systems. We further propose SCOPE, an automated evaluation framework that integrates causal-reasoning-driven error pattern mining with multi-granularity evaluation rule generation. Its core innovation is the first systematic formalization of the tripartite error propagation mechanism among tools, agents, and users, which enables precise identification of interactions that are superficially satisfactory but fundamentally erroneous. Experiments demonstrate that SCOPE achieves significantly higher coverage of latent errors than baseline methods, establishing a new standard for robust evaluation of tool-augmented dialogue systems.
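To make the failure mode concrete, here is a minimal, hypothetical sketch of checking a single dialogue turn for a latent error of this kind. The `Turn` structure and `detect_latent_error` helper are illustrative assumptions, not part of the TRACE/SCOPE release, and real rubric-based evaluation would compare much richer signals than a key-by-key mismatch.

```python
from dataclasses import dataclass

# Hypothetical, simplified data model for one turn of a tool-augmented
# dialogue: what the tool returned, what the agent asserted to the user,
# and whether the user sounded satisfied. Names are illustrative only.
@dataclass
class Turn:
    tool_output: dict      # structured result returned by the tool
    agent_claim: dict      # facts the agent asserted to the user
    user_satisfied: bool   # surface-level satisfaction signal


def detect_latent_error(turn: Turn) -> bool:
    """Flag a 'superficially satisfactory but fundamentally erroneous' turn:
    the agent's stated facts contradict the tool output, yet the user
    signalled satisfaction. This sketch only checks overlapping keys."""
    contradiction = any(
        key in turn.tool_output and turn.tool_output[key] != value
        for key, value in turn.agent_claim.items()
    )
    return contradiction and turn.user_satisfied


if __name__ == "__main__":
    # Example: the tool reports a 10:30 departure, the agent tells the user
    # 9:30, and the user happily accepts: satisfaction alone misses the error.
    turn = Turn(
        tool_output={"departure_time": "10:30"},
        agent_claim={"departure_time": "09:30"},
        user_satisfied=True,
    )
    print(detect_latent_error(turn))  # True: latent error despite satisfaction
```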
📝 Abstract
Evaluating conversational AI systems that use external tools is challenging, as errors can arise from complex interactions among the user, agent, and tools. While existing evaluation methods assess either user satisfaction or agents' tool-calling capabilities, they fail to capture critical errors in multi-turn tool-augmented dialogues, such as when agents misinterpret tool results yet still appear satisfactory to users. We introduce TRACE, a benchmark of systematically synthesized tool-augmented conversations covering diverse error cases, and SCOPE, an evaluation framework that automatically discovers diverse error patterns and evaluation rubrics in tool-augmented dialogues. Experiments show that SCOPE significantly outperforms the baseline, particularly on challenging cases where user satisfaction signals are misleading.