MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents

📅 2024-08-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Existing role-playing agents (RPAs) are confined to the textual modality, limiting their capacity to simulate humans' multimodal perceptual capabilities and hindering applications in affective computing and sociological research. To address this, the paper introduces the concept of Multimodal Role-Playing Agents (MRPAs) and proposes MMRole, a comprehensive framework for their development and evaluation: (1) MMRole-Data, a large-scale, high-quality personalized multimodal dataset comprising 85 characters, 11K images, and 14K single- or multi-turn dialogues; (2) MMRole-Eval, a robust evaluation approach covering eight metrics across three dimensions, in which a dedicated reward model scores MRPAs against constructed ground-truth data; and (3) MMRole-Agent, the first specialized MRPA. Extensive evaluation demonstrates the improved performance of MMRole-Agent and identifies the primary challenges in developing MRPAs: multimodal understanding and role-playing consistency.

📝 Abstract
Recently, Role-Playing Agents (RPAs) have garnered increasing attention for their potential to deliver emotional value and facilitate sociological research. However, existing studies are primarily confined to the textual modality, unable to simulate humans' multimodal perceptual capabilities. To bridge this gap, we introduce the concept of Multimodal Role-Playing Agents (MRPAs), and propose a comprehensive framework, MMRole, for their development and evaluation, which comprises a personalized multimodal dataset and a robust evaluation approach. Specifically, we construct a large-scale, high-quality dataset, MMRole-Data, consisting of 85 characters, 11K images, and 14K single or multi-turn dialogues. Additionally, we present a robust evaluation approach, MMRole-Eval, encompassing eight metrics across three dimensions, where a reward model is designed to score MRPAs with the constructed ground-truth data for comparison. Moreover, we develop the first specialized MRPA, MMRole-Agent. Extensive evaluation results demonstrate the improved performance of MMRole-Agent and highlight the primary challenges in developing MRPAs, emphasizing the need for enhanced multimodal understanding and role-playing consistency. The data, code, and models are all available at https://github.com/YanqiDai/MMRole.
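The abstract's evaluation setup, in which a reward model scores an MRPA's reply against constructed ground-truth data across eight metrics grouped into three dimensions, can be sketched roughly as follows. All names here (the dimension/metric labels, `dummy_reward_model`, `evaluate`) are illustrative assumptions, and the toy word-overlap scorer merely stands in for the paper's trained reward model:

```python
from statistics import mean

# Illustrative metric layout: eight metrics grouped into three dimensions.
# The paper's actual metric and dimension names may differ.
DIMENSIONS = {
    "instruction_adherence": ["instruction_following"],
    "multimodal_understanding": ["image_comprehension", "response_accuracy"],
    "role_playing_quality": [
        "personality_consistency",
        "knowledge_consistency",
        "tone_consistency",
        "response_fluency",
        "coherence",
    ],
}

def dummy_reward_model(metric: str, candidate: str, reference: str) -> float:
    """Stand-in scorer; MMRole-Eval uses a trained reward model instead.

    Returns a toy word-overlap ratio so the sketch runs end to end."""
    shared = set(candidate.lower().split()) & set(reference.lower().split())
    return len(shared) / max(len(reference.split()), 1)

def evaluate(candidate: str, reference: str) -> dict:
    """Score a candidate reply against the ground-truth reply per dimension."""
    return {
        dim: mean(dummy_reward_model(m, candidate, reference) for m in metrics)
        for dim, metrics in DIMENSIONS.items()
    }

scores = evaluate(
    "I am Sherlock, observing the image closely.",
    "I am Sherlock Holmes, and I observe the image closely.",
)
print(scores)
```

The key design point this sketch captures is that scoring is comparative: each candidate reply is judged relative to ground-truth data rather than in isolation, which makes scores across different MRPAs directly comparable.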
Problem

Research questions and friction points this paper is trying to address.

Develop multimodal role-playing agents
Evaluate agents with robust metrics
Enhance multimodal understanding and consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal dataset for role-playing
Robust evaluation with multiple metrics
First specialized multimodal role-playing agent
Yanqi Dai
Gaoling School of Artificial Intelligence, Renmin University of China
Huanran Hu
College of Information and Electrical Engineering, China Agricultural University
Lei Wang
Gaoling School of Artificial Intelligence, Renmin University of China
Shengjie Jin
Gaoling School of Artificial Intelligence, Renmin University of China
Xu Chen
Gaoling School of Artificial Intelligence, Renmin University of China
Zhiwu Lu
Professor, Renmin University of China
Machine Learning · Computer Vision · Large Multimodal Models · Video Generation