Video2Roleplay: A Multimodal Dataset and Framework for Video-Guided Role-playing Agents

📅 2025-09-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing role-playing agents (RPAs) rely on static role configurations and lack real-time perception and adaptive response to dynamic contexts. This paper introduces a Dynamic Role Profiling framework—the first to incorporate video modality into role-playing—enabling temporal evolution of character behavior and immersive, context-aware interaction. Our key contributions are threefold: (1) We construct Role-playing-Video60k, the first large-scale video-dialogue dataset specifically designed for role-playing; (2) We propose a novel multimodal architecture integrating adaptive temporal frame sampling, LLM-driven content summarization, and dual-modal (video + language) role representation; (3) Extensive evaluation demonstrates significant improvements across eight quantitative metrics—including response quality, contextual consistency, and behavioral coherence—validating that dynamic contextual perception is critical for enhancing RPA intelligence.

Technology Category

Application Category

📝 Abstract
Role-playing agents (RPAs) have attracted growing interest for their ability to simulate immersive and interactive characters. However, existing approaches primarily focus on static role profiles, overlooking the dynamic perceptual abilities inherent to humans. To bridge this gap, we introduce the concept of dynamic role profiles by incorporating video modality into RPAs. To support this, we construct Role-playing-Video60k, a large-scale, high-quality dataset comprising 60k videos and 700k corresponding dialogues. Based on this dataset, we develop a comprehensive RPA framework that combines adaptive temporal sampling with both dynamic and static role profile representations. Specifically, the dynamic profile is created by adaptively sampling video frames and feeding them to the LLM in temporal order, while the static profile consists of (1) character dialogues from training videos during fine-tuning, and (2) a summary context from the input video during inference. This joint integration enables RPAs to generate greater responses. Furthermore, we propose a robust evaluation method covering eight metrics. Experimental results demonstrate the effectiveness of our framework, highlighting the importance of dynamic role profiles in developing RPAs.
Problem

Research questions and friction points this paper is trying to address.

Incorporating video modality into role-playing agents
Creating dynamic role profiles with adaptive temporal sampling
Developing a multimodal dataset and evaluation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic role profiles using video modality
Adaptive temporal sampling of video frames
Combining dynamic and static profile representations
🔎 Similar Papers
No similar papers found.
X
Xueqiao Zhang
Zhejiang University
C
Chao Zhang
Zhejiang University
Jingtao Xu
Jingtao Xu
Baidu
Computer visionimage processingmachine learning
Yifan Zhu
Yifan Zhu
Beijing University of Posts and Telecommunications
PEFT of LLMsGraph RAGGraph mining
X
Xin Shi
Zhejiang University
Y
Yi Yang
Zhejiang University
Y
Yawei Luo
Zhejiang University