🤖 AI Summary
High-quality dialogue data is critical for task-oriented dialogue systems, yet human annotation is prohibitively expensive, and it remains unclear whether LLM-generated synthetic dialogues can reliably substitute for real human interactions. Method: We introduce the first multidimensional user behavior analysis framework for task-oriented dialogue, spanning conversation strategy, interaction style, and conversation evaluation, and conduct parallel human and agent dialogue collection with quantitative comparison across four representative scenarios, employing behavioral dimension modeling and controlled experiments. Contribution/Results: Our analysis reveals systematic differences between agent and human users in problem-solving approach, question broadness, engagement, context dependency, feedback polarity, language style, and hallucination awareness; however, agent users closely match human users on depth-first versus breadth-first exploration and on usefulness. This work establishes a reproducible, scalable analytical paradigm and an empirical benchmark for LLM-based user simulation in task-oriented dialogue research.
📝 Abstract
Task-oriented conversational systems are essential for efficiently addressing diverse user needs, yet their development requires substantial amounts of high-quality conversational data that are challenging and costly to obtain. While large language models (LLMs) have demonstrated potential in generating synthetic conversations, the extent to which these agent-generated interactions can effectively substitute for real human conversations remains unclear. This work presents the first systematic comparison between LLM-simulated users and human users in personalized task-oriented conversations. We propose a comprehensive analytical framework encompassing three key aspects (conversation strategy, interaction style, and conversation evaluation) and ten distinct dimensions for evaluating user behaviors, and we collect parallel conversational datasets from human users and LLM agent users across four representative scenarios under identical conditions. Our analysis reveals significant behavioral differences between the two user types in problem-solving approach, question broadness, user engagement, context dependency, feedback polarity and promise, language style, and hallucination awareness. At the same time, agent users and human users behave consistently along the depth-first versus breadth-first dimension and the usefulness dimension. These findings provide critical insights for advancing LLM-based user simulation. Our multi-dimensional taxonomy constitutes a generalizable framework for analyzing user behavior patterns, offering insights into how LLM agent users differ from human users. With this work, we provide perspectives for rethinking how user simulation should be used in future conversational systems.
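
The abstract describes the comparison only at a high level. As a minimal, non-authoritative sketch of what a per-dimension analysis over the parallel corpora might look like, the snippet below annotates each dialogue with scores along a few illustrative dimensions and tests for human-agent differences with a Mann-Whitney U test. The dimension names, the synthetic data, and the choice of test are assumptions for demonstration, not the paper's published protocol.

```python
"""Illustrative sketch: comparing human vs. agent user behavior along
annotated dialogue dimensions. Dimension names, data, and the choice of
statistical test are assumptions, not the paper's actual protocol."""

import random
from dataclasses import dataclass
from typing import Dict, List

from scipy.stats import mannwhitneyu

# Hypothetical subset of the paper's ten behavioral dimensions.
DIMENSIONS = [
    "question_broadness",
    "context_dependency",
    "feedback_polarity",
    "hallucination_awareness",
]


@dataclass
class AnnotatedDialogue:
    user_type: str            # "human" or "agent"
    scores: Dict[str, float]  # per-dimension annotation scores in [0, 1]


def compare_dimension(dialogues: List[AnnotatedDialogue], dim: str):
    """Two-sided Mann-Whitney U test on one dimension's scores,
    comparing the human-user and agent-user dialogue groups."""
    human = [d.scores[dim] for d in dialogues if d.user_type == "human"]
    agent = [d.scores[dim] for d in dialogues if d.user_type == "agent"]
    stat, p = mannwhitneyu(human, agent, alternative="two-sided")
    return stat, p


if __name__ == "__main__":
    # Tiny synthetic corpus; a real analysis would load the annotated
    # parallel human/agent dialogue datasets instead.
    random.seed(0)
    data = [
        AnnotatedDialogue("human", {d: random.gauss(0.6, 0.1) for d in DIMENSIONS})
        for _ in range(30)
    ] + [
        AnnotatedDialogue("agent", {d: random.gauss(0.5, 0.1) for d in DIMENSIONS})
        for _ in range(30)
    ]
    for dim in DIMENSIONS:
        stat, p = compare_dimension(data, dim)
        print(f"{dim:25s} U={stat:7.1f}  p={p:.4f}")
```

In an actual replication, the paper's full ten-dimension taxonomy, its annotation scheme, and its chosen statistics would replace these placeholders.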