SimulatorArena: Are User Simulators Reliable Proxies for Multi-Turn Evaluation of AI Assistants?

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper asks whether LLM-based user simulators can reliably substitute for real users in multi-turn evaluation of AI assistants. It introduces SimulatorArena, a benchmark of 909 annotated human-LLM conversations on two interactive tasks: math tutoring and document creation. Simulators are assessed along two complementary axes: how closely their messages match real user behavior and how well the assistant ratings they produce align with human judgments. Across a range of simulation methods, simulators conditioned on user profiles (e.g., background and message style) align best with human ratings, reaching Spearman's rho of about 0.7 on both tasks. Using the best simulator for each task, the authors benchmark 18 assistants, including recent models such as GPT-5, Claude 4.1 Opus, and Gemini 2.5 Pro, demonstrating a practical, low-cost, and reproducible alternative to human evaluation.
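To make the profile-conditioned simulation idea concrete, below is a minimal sketch of how a simulator might compose a persona-grounded prompt before generating each user turn. The profile fields (`background`, `style`, `goal`) and the `llm_generate` callable are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of a profile-conditioned user simulator (illustrative only;
# field names and the llm_generate() callable are hypothetical placeholders).

def build_simulator_prompt(profile: dict, task: str, dialogue: list) -> str:
    """Compose a prompt that conditions the simulated user on a persona."""
    persona = (
        f"You are simulating a user working on a {task} task.\n"
        f"Background: {profile['background']}\n"
        f"Communication style: {profile['style']}\n"
        f"Goal: {profile['goal']}\n"
        "Reply as this user would, in a single message."
    )
    history = "\n".join(f"{turn['role']}: {turn['text']}" for turn in dialogue)
    return f"{persona}\n\nConversation so far:\n{history}\n\nUser:"

def next_user_message(profile: dict, task: str, dialogue: list, llm_generate) -> str:
    """Produce the simulated user's next turn with any text-generation backend."""
    prompt = build_simulator_prompt(profile, task, dialogue)
    return llm_generate(prompt)  # llm_generate: assumed callable, str -> str
```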

📝 Abstract
Large language models (LLMs) are increasingly used in interactive applications, and human evaluation remains the gold standard for assessing their performance in multi-turn conversations. Since human studies are costly, time-consuming, and hard to reproduce, recent work explores using LLMs to simulate users for automatic assistant evaluation. However, there is no benchmark or systematic study to evaluate whether these simulated users are reliable stand-ins for real users. To address this, we introduce SimulatorArena, a benchmark of 909 annotated human-LLM conversations on two interactive tasks -- math tutoring and document creation. SimulatorArena evaluates simulators based on how closely their messages match human behavior and how well their assistant ratings align with human judgments. Experiments on various simulator methods show that simulators conditioned on user profiles, capturing traits like background and message styles, align closely with human judgments. They reach Spearman's $\rho$ of 0.7 on both tasks, providing a practical, scalable alternative to human evaluation. Using the best simulator for each task, we benchmark 18 assistants, including the latest LLMs such as GPT-5, Claude 4.1 Opus, and Gemini 2.5 Pro.
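The second fidelity metric, agreement between simulator-derived and human-derived assistant ratings, can be illustrated with a short Spearman correlation check. This is a sketch of the comparison only; the scores below are fabricated for illustration, and the paper reports $\rho$ of roughly 0.7 for the best simulators on both tasks.

```python
# Sketch: rank-correlating simulator-derived and human-derived assistant scores.
# The score values are fabricated for illustration, not taken from the paper.
from scipy.stats import spearmanr

# Mean rating of each assistant, listed in the same assistant order.
human_scores     = [4.2, 3.1, 4.8, 2.9, 3.7]   # from the human study
simulator_scores = [4.0, 3.3, 4.6, 2.7, 3.9]   # from simulated users

rho, p_value = spearmanr(human_scores, simulator_scores)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
# Higher rho means assistant rankings from simulated users track human rankings;
# the paper reports about 0.7 for its best simulators on both tasks.
```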
Problem

Research questions and friction points this paper is trying to address.

Evaluating the reliability of user simulators for AI assistant assessment
Benchmarking simulated users against human conversational behavior
Providing a scalable alternative to costly human evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SimulatorArena benchmark evaluates user simulator reliability
Simulators conditioned on user profiles match human judgments
Profile-conditioned simulators offer a scalable alternative to costly human evaluation