🤖 AI Summary
Existing Role-Playing Agents (RPAs) are confined to the textual modality and cannot simulate humans' multimodal perceptual capabilities, limiting their potential for delivering emotional value and facilitating sociological research. To bridge this gap, the paper introduces the concept of Multimodal Role-Playing Agents (MRPAs) and proposes MMRole, a comprehensive framework for their development and evaluation: (1) MMRole-Data, a large-scale, high-quality personalized multimodal dataset comprising 85 characters, 11K images, and 14K single- or multi-turn dialogues; (2) MMRole-Eval, a robust evaluation approach covering eight metrics across three dimensions, with a dedicated reward model that scores MRPAs against constructed ground-truth data; (3) MMRole-Agent, the first specialized MRPA. Extensive evaluation demonstrates the improved performance of MMRole-Agent and highlights the two primary challenges in developing MRPAs: multimodal understanding and role-playing consistency.
📝 Abstract
Recently, Role-Playing Agents (RPAs) have garnered increasing attention for their potential to deliver emotional value and facilitate sociological research. However, existing studies are primarily confined to the textual modality and cannot simulate humans' multimodal perceptual capabilities. To bridge this gap, we introduce the concept of Multimodal Role-Playing Agents (MRPAs) and propose MMRole, a comprehensive framework for their development and evaluation, which comprises a personalized multimodal dataset and a robust evaluation approach. Specifically, we construct a large-scale, high-quality dataset, MMRole-Data, consisting of 85 characters, 11K images, and 14K single- or multi-turn dialogues. Additionally, we present a robust evaluation approach, MMRole-Eval, encompassing eight metrics across three dimensions, in which a reward model is designed to score MRPAs against the constructed ground-truth data. Moreover, we develop the first specialized MRPA, MMRole-Agent. Extensive evaluation results demonstrate the improved performance of MMRole-Agent and highlight the primary challenges in developing MRPAs, emphasizing the need for enhanced multimodal understanding and role-playing consistency. The data, code, and models are all available at https://github.com/YanqiDai/MMRole.