PsyMem: Fine-grained psychological alignment and Explicit Memory Control for Advanced Role-Playing LLMs

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current role-playing LLMs suffer from two critical limitations: superficial character modeling—relying solely on surface-level textual descriptions—and insufficient memory consistency—employing implicit knowledge or weak retrieval augmentation without explicit alignment—thereby undermining reliability in high-stakes applications such as trustworthy social simulation. To address these, we propose PsyMem, the first framework to explicitly model characters using 26 fine-grained psychological dimensions and to introduce a memory alignment training mechanism that enables dynamic, controllable memory retrieval during inference. Built upon Qwen2.5-7B-Instruct, PsyMem is trained via psychometric feature injection and memory consistency supervision on a self-constructed dataset derived from novels, comprising 5,414 characters and 38,962 dialogues. Experiments demonstrate that PsyMem-Qwen significantly outperforms baselines on character fidelity and anthropomorphism metrics, while exhibiting markedly improved reliability in trustworthy social simulation tasks.

📝 Abstract
Existing LLM-based role-playing methods often rely on superficial textual descriptions or simplistic metrics, inadequately modeling both intrinsic and extrinsic character dimensions. Additionally, they typically simulate character memory with implicit model knowledge or basic retrieval-augmented generation without explicit memory alignment, compromising memory consistency. These two issues weaken the reliability of role-playing LLMs in applications such as trustworthy social simulation. To address these limitations, we propose PsyMem, a novel framework integrating fine-grained psychological attributes and explicit memory control for role-playing. PsyMem supplements textual descriptions with 26 psychological indicators to model characters in detail. Additionally, PsyMem implements memory alignment training, explicitly training the model to align a character's responses with its memory and thereby enabling dynamic memory-controlled responding during inference. By training Qwen2.5-7B-Instruct on our specially designed dataset (comprising 5,414 characters and 38,962 dialogues extracted from novels), the resulting model, termed PsyMem-Qwen, outperforms baseline models in role-playing, achieving the best performance in human-likeness and character fidelity.
Problem

Research questions and friction points this paper is trying to address.

Inadequate modeling of intrinsic and extrinsic character dimensions in role-playing LLMs
Lack of explicit memory alignment compromising memory consistency
Weak reliability in applications like trustworthy social simulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates 26 psychological indicators for character modeling
Implements explicit memory alignment training
Uses a custom dataset for dynamic memory-controlled responding
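The paper does not publish its inference interface, but the idea of "dynamic memory-controlled responding" can be illustrated with a minimal sketch: the caller explicitly decides which memory entries are injected into the role-play prompt, so the character's recall is controllable rather than implicit. All names below (`MemoryEntry`, `CharacterProfile`, `build_prompt`) are hypothetical, not from the paper.

```python
# Hypothetical sketch of memory-controlled role-play prompting.
# The paper's actual training/inference pipeline is not published;
# this only illustrates explicit, caller-controlled memory injection.
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    turn: int
    content: str


@dataclass
class CharacterProfile:
    name: str
    description: str
    # Stand-in for the paper's 26 fine-grained psychological indicators.
    psych_indicators: dict = field(default_factory=dict)


def build_prompt(profile: CharacterProfile, memory: list[MemoryEntry],
                 user_utterance: str, k: int = 3) -> str:
    """Compose a prompt where memory is explicit: only the k most recent
    entries are injected, so the caller controls what the character recalls."""
    traits = ", ".join(f"{n}={v}" for n, v in profile.psych_indicators.items())
    recalled = "\n".join(f"- (turn {m.turn}) {m.content}" for m in memory[-k:])
    return (
        f"You are {profile.name}. {profile.description}\n"
        f"Psychological profile: {traits}\n"
        f"Memories you may draw on:\n{recalled or '- (none)'}\n"
        "Stay consistent with the memories above.\n"
        f"User: {user_utterance}\nCharacter:"
    )


profile = CharacterProfile(
    "Elizabeth", "A witty, observant gentlewoman.",
    {"openness": 0.8, "agreeableness": 0.6},
)
memory = [
    MemoryEntry(1, "Met Mr. Darcy at the Meryton ball."),
    MemoryEntry(2, "Overheard Darcy call her 'tolerable'."),
]
prompt = build_prompt(profile, memory, "What do you think of Mr. Darcy?")
print(prompt)
```

The prompt would then be sent to the fine-tuned model; trimming or editing the `memory` list before each turn is what makes the recall dynamic and controllable.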
Xilong Cheng
Communication University of China
Role-Playing Agents · Computer Vision · Time Series Forecasting
Yunxiao Qin
Communication University of China
Language Model · Multimodal Learning · LLM-based Agent · Embodied Learning
Yuting Tan
Communication University of China
Zhengnan Li
The Chinese University of Hong Kong, Shenzhen
Time Series Forecasting · Continual Learning
Ye Wang
Communication University of China, State Key Laboratory of Media Convergence and Communication
Hongjiang Xiao
Communication University of China, State Key Laboratory of Media Convergence and Communication
Yuan Zhang
Communication University of China, State Key Laboratory of Media Convergence and Communication