RMTBench: Benchmarking LLMs Through Multi-Turn User-Centric Role-Playing

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing role-playing evaluation benchmarks predominantly adopt a character-centric, single-turn paradigm, failing to reflect authentic user motivations and multi-turn interaction dynamics. To address this gap, we propose RMTBench—the first user-centered, bilingual, multi-turn role-playing benchmark, comprising 80 diverse personas and over 8,000 motivation-driven dialogue turns. Our method introduces: (1) an intent-oriented evaluation framework, where multi-turn dialogues are explicitly structured around user-defined motivations; (2) hybrid persona modeling—integrating both customized and abstract role representations—coupled with LLM-based automated scoring; and (3) the release of a large-scale, bilingual benchmark dataset. Experimental results demonstrate that RMTBench significantly improves assessment validity for LLMs’ role consistency, motivation responsiveness, and interaction naturalness, thereby bridging the critical gap between academic evaluation and real-world deployment.

📝 Abstract
Recent advancements in Large Language Models (LLMs) have shown outstanding potential for role-playing applications. Evaluating these capabilities is becoming crucial yet remains challenging. Existing benchmarks mostly adopt a character-centric approach, simplify user-character interactions to isolated Q&A tasks, and fail to reflect real-world applications. To address this limitation, we introduce RMTBench, a comprehensive user-centric bilingual role-playing benchmark featuring 80 diverse characters and over 8,000 dialogue rounds. RMTBench includes custom characters with detailed backgrounds and abstract characters defined by simple traits, enabling evaluation across various user scenarios. Our benchmark constructs dialogues based on explicit user motivations rather than character descriptions, ensuring alignment with practical user applications. Furthermore, we construct an authentic multi-turn dialogue simulation mechanism. With carefully selected evaluation dimensions and LLM-based scoring, this mechanism captures the complex intention of conversations between the user and the character. By shifting focus from character background to user intention fulfillment, RMTBench bridges the gap between academic evaluation and practical deployment requirements, offering a more effective framework for assessing role-playing capabilities in LLMs. All code and datasets will be released soon.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' role-playing capabilities in user-centric scenarios
Addressing limitations of character-centric benchmarks in real-world applications
Developing a multi-turn dialogue framework for practical user intention fulfillment
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-centric bilingual role-playing benchmark
Multi-turn dialogue simulation mechanism
LLM-based scoring for intention fulfillment
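To make the evaluation pipeline concrete, here is a minimal sketch of how motivation-driven, LLM-judged scoring of a multi-turn dialogue might be structured. All names (`Turn`, `Dialogue`, `score_dialogue`, the dimension labels, the 1-5 scale) are hypothetical illustrations, not the paper's actual interface; the `stub_judge` stands in for a real LLM API call.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Evaluation dimensions loosely mirroring those named in the summary
# (role consistency, motivation responsiveness, interaction naturalness).
DIMENSIONS = ["role_consistency", "motivation_fulfillment", "naturalness"]

@dataclass
class Turn:
    speaker: str  # "user" or "character"
    text: str

@dataclass
class Dialogue:
    persona: str       # custom (detailed background) or abstract (simple traits)
    motivation: str    # explicit user motivation that drives the dialogue
    turns: List[Turn] = field(default_factory=list)

def score_dialogue(dialogue: Dialogue,
                   judge: Callable[[str], int]) -> Dict[str, int]:
    """Ask an LLM judge for a 1-5 score on each evaluation dimension."""
    scores = {}
    transcript = "\n".join(f"{t.speaker}: {t.text}" for t in dialogue.turns)
    for dim in DIMENSIONS:
        prompt = (
            f"Persona: {dialogue.persona}\n"
            f"User motivation: {dialogue.motivation}\n"
            f"{transcript}\n"
            f"Rate {dim} from 1 to 5. Answer with a single integer."
        )
        scores[dim] = judge(prompt)
    return scores

def aggregate(per_dialogue: List[Dict[str, int]]) -> Dict[str, float]:
    """Average each dimension's score across all evaluated dialogues."""
    n = len(per_dialogue)
    return {d: sum(s[d] for s in per_dialogue) / n for d in DIMENSIONS}

# Demo with a stub judge that always returns 4 (a real setup would call an LLM).
stub_judge = lambda prompt: 4
dlg = Dialogue(
    persona="A stoic medieval blacksmith",
    motivation="Learn how a sword is forged",
    turns=[Turn("user", "Can you teach me to forge a blade?"),
           Turn("character", "Aye. First, we heat the steel.")],
)
print(aggregate([score_dialogue(dlg, stub_judge)]))
```

In practice the per-dimension averages would be reported per model and per character type (custom vs. abstract), which is what lets a user-centric benchmark compare intention fulfillment rather than only character fidelity.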