🤖 AI Summary
Existing agent evaluation methods lack a unified framework that jointly accounts for user roles, expertise levels, and multidimensional performance metrics. This work proposes the TED framework, which incorporates user expertise into the evaluation paradigm by leveraging reusable user-role templates and natural language subgoals to simulate interactive dialogues. The framework combines LLM-as-a-judge automated scoring with an inconsistency-based error diagnosis mechanism to holistically assess dialogue quality, turn efficiency, and intermediate progress. Experimental results reveal significant performance disparities across user groups with varying expertise. Guided by TED's diagnostic feedback, subsequent agent refinements yield up to an 8–10% improvement on key evaluation metrics.
📝 Abstract
Agent applications are increasingly adopted to automate workflows across diverse tasks. However, because these agents operate in heterogeneous domains, building a scalable evaluation framework is challenging. Prior works each employ their own methods to determine task success, such as database lookups or regex matching, adding complexity to the development of a unified agent evaluation approach. Moreover, they do not systematically account for the user's role or expertise in the interaction, providing incomplete insights into the agent's performance. We argue that effective agent evaluation goes beyond correctness alone, incorporating conversation quality, efficiency, and systematic diagnosis of agent errors. To address this, we introduce the TED framework (Talk, Evaluate, Diagnose). (1) Talk: We leverage reusable, generic expert and non-expert user persona templates for user-agent interaction. (2) Evaluate: We adapt existing datasets by representing subgoals, such as tool signatures and responses, as natural language grading notes, evaluated automatically with LLM-as-a-judge. We propose new metrics that capture both the turn efficiency and intermediate progress of the agent, complementing the user-aware setup. (3) Diagnose: We introduce an automated error analysis tool that analyzes inconsistencies between the judge and agents, uncovers common errors, and provides actionable feedback for agent improvement. We show that our TED framework reveals new insights into agent performance across models and user expertise levels. We also demonstrate potential gains in agent performance of up to 8–10% on our proposed metrics after incorporating the identified error remedies into the agent's design.
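To make the Evaluate step concrete, the sketch below shows one way grading-note evaluation could look. Everything here is an illustrative assumption rather than the paper's actual interface: `GradingNote`, `evaluate_transcript`, and the `stub_judge` substring matcher are invented names (a real setup would call an LLM judge per turn), and the turn-efficiency and intermediate-progress formulas are simplified stand-ins for the proposed metrics.

```python
from dataclasses import dataclass

@dataclass
class GradingNote:
    # A subgoal expressed in natural language, e.g. an expected
    # tool signature or response (hypothetical representation).
    text: str
    satisfied: bool = False

def evaluate_transcript(transcript, notes, judge):
    """Score a dialogue transcript against grading notes.

    `judge(turn, note_text) -> bool` is a placeholder for an
    LLM-as-a-judge call; this whole function is an assumed sketch,
    not TED's actual scoring code.
    """
    first_success_turn = None
    for i, turn in enumerate(transcript, start=1):
        for note in notes:
            if not note.satisfied and judge(turn, note.text):
                note.satisfied = True
        if first_success_turn is None and all(n.satisfied for n in notes):
            first_success_turn = i
    # Intermediate progress: fraction of subgoals satisfied so far.
    progress = sum(n.satisfied for n in notes) / len(notes)
    # Simplified turn efficiency: fewer turns to satisfy all notes
    # yields a higher score; 0.0 if the task was never completed.
    efficiency = 1.0 / first_success_turn if first_success_turn else 0.0
    return {"progress": progress, "turn_efficiency": efficiency}

# Toy judge: substring matching stands in for the LLM call.
def stub_judge(turn, note_text):
    return note_text.lower() in turn.lower()

transcript = ["Calling search_flights(SFO, JFK)",
              "Booked flight, confirmation #123"]
notes = [GradingNote("search_flights"), GradingNote("confirmation")]
print(evaluate_transcript(transcript, notes, stub_judge))
```

Because the notes are plain natural language, the same harness can grade heterogeneous tasks (database lookups, tool calls, free-form responses) without task-specific checkers, which is the scalability argument the abstract makes.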