Human vs. Agent in Task-Oriented Conversations

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
High-quality dialogue data is critical for task-oriented dialogue systems, yet human annotation is prohibitively expensive; it remains unclear whether LLM-generated synthetic dialogues can reliably substitute for real human interactions. Method: We introduce the first multidimensional user behavior analysis framework for task-oriented dialogue—spanning strategy, interaction style, and evaluation—and conduct parallel human–agent dialogue collection and quantitative comparison across four representative scenarios, employing behavioral dimension modeling and controlled experiments. Contribution/Results: Our analysis reveals systematic agent-user biases in feedback polarity, linguistic style, and hallucination awareness; however, agents closely match human users in problem-solving effectiveness and search strategy. This work establishes a reproducible, scalable analytical paradigm and empirical benchmark for LLM-based user simulation in task-oriented dialogue research.

📝 Abstract
Task-oriented conversational systems are essential for efficiently addressing diverse user needs, yet their development requires substantial amounts of high-quality conversational data that is challenging and costly to obtain. While large language models (LLMs) have demonstrated potential in generating synthetic conversations, the extent to which these agent-generated interactions can effectively substitute for real human conversations remains unclear. This work presents the first systematic comparison between LLM-simulated users and human users in personalized task-oriented conversations. We propose a comprehensive analytical framework encompassing three key aspects (conversation strategy, interaction style, and conversation evaluation) and ten distinct dimensions for evaluating user behaviors, and collect parallel conversational datasets from both human users and LLM agent users across four representative scenarios under identical conditions. Our analysis reveals significant behavioral differences between the two user types in problem-solving approaches, question broadness, user engagement, context dependency, feedback polarity and promise, language style, and hallucination awareness. We found that agent users and human users behave consistently along the depth-first versus breadth-first dimension, as well as the usefulness dimension. These findings provide critical insights for advancing LLM-based user simulation. Our multi-dimensional taxonomy constitutes a generalizable framework for analyzing user behavior patterns, offering insights into both LLM agent users and human users. With this work, we provide perspectives on rethinking how user simulation should be employed in future conversational systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating whether LLM-simulated users can effectively substitute for real human users in conversations
Systematically comparing human versus agent behaviors in task-oriented dialogues
Developing a multi-dimensional framework to analyze user behavior patterns
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematically compared LLM-simulated and human users
Proposed multi-dimensional framework for analyzing user behavior
Collected parallel datasets across four representative scenarios
Zhefan Wang
DCST, Tsinghua University
Ning Geng
Emory University
Zhiqiang Guo
DCST, Tsinghua University
Weizhi Ma
Tsinghua University
LLM and Agents · Recommendation · AI for Healthcare
Min Zhang
DCST, Tsinghua University