Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of large language models (LLMs) in character simulation: their tendency to reproduce only surface-level facts and dialogues while failing to capture deep cognitive patterns and idiosyncratic linguistic styles. The authors propose deep persona simulation, instantiated with Lu Xun as a canonical case study. Leveraging his 17 essay collections, they design four structured tasks spanning linguistic structure modeling and ideological internalization. The approach combines pretraining with three-stage fine-tuning (multiple-choice QA, generative QA, and style transfer) and introduces CharLoRA, a parameter-efficient mechanism that jointly optimizes a general linguistic style expert and multiple task-specific ideological understanding experts. Experiments demonstrate significant improvements over baselines in linguistic accuracy, viewpoint comprehension, and stylistic consistency, validating deep ideological modeling for persona simulation.

📝 Abstract
Previous approaches to persona simulation with large language models (LLMs) have typically relied on learning basic biographical information or on limited role-play dialogue datasets to capture a character's responses. However, a holistic representation of an individual goes beyond surface-level facts or conversations to deeper thoughts and reasoning. In this work, we introduce CharacterBot, a model designed to replicate both the linguistic patterns and distinctive thought processes of a character. Using Lu Xun, a renowned Chinese writer, as a case study, we propose four training tasks derived from his 17 essay collections. These include a pre-training task focused on mastering external linguistic structures and knowledge, as well as three fine-tuning tasks: multiple-choice question answering, generative question answering, and style transfer, each aligning the LLM with Lu Xun's internal ideation and writing style. To optimize learning across these tasks, we introduce a CharLoRA parameter updating mechanism, in which a general linguistic style expert collaborates with task-specific experts to better capture both the language style and the deeper thoughts of the character. We evaluate CharacterBot on three tasks for linguistic accuracy and opinion comprehension, demonstrating that it significantly outperforms the baselines on our adapted metrics. We hope this work inspires future research on deep character persona simulation with LLMs.
Problem

Research questions and friction points this paper is trying to address.

Enhance persona simulation in large language models beyond surface-level facts.
Replicate a character's linguistic patterns and thought processes.
Train the model with task-specific objectives for deep character understanding.
Innovation

Methods, ideas, or system contributions that make the work stand out.

CharacterBot model
CharLoRA mechanism
Lu Xun case study
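The CharLoRA idea described above (a shared "general linguistic style" low-rank expert combined with per-task experts for the three fine-tuning objectives) can be sketched as follows. This is a minimal illustration assuming standard LoRA-style low-rank updates; the class, task names, and combination rule are assumptions for clarity, not the paper's actual implementation.

```python
import numpy as np

class CharLoRALayer:
    """Hypothetical sketch of a CharLoRA-style linear layer:
    a frozen base weight W plus a shared low-rank style expert
    (A_style, B_style) and one low-rank expert per task."""

    def __init__(self, d_in, d_out, rank, tasks, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        # Shared "general linguistic style" expert (B zero-initialized,
        # as in standard LoRA, so the initial update is a no-op).
        self.A_style = rng.standard_normal((rank, d_in)) * 0.01
        self.B_style = np.zeros((d_out, rank))
        # One expert per task, e.g. multiple-choice QA, generative QA,
        # and style transfer (assumed task keys).
        self.experts = {
            t: (rng.standard_normal((rank, d_in)) * 0.01, np.zeros((d_out, rank)))
            for t in tasks
        }

    def forward(self, x, task):
        # Combine the shared style update with the task-specific update.
        A_t, B_t = self.experts[task]
        delta = self.B_style @ self.A_style + B_t @ A_t
        return (self.W + delta) @ x

layer = CharLoRALayer(d_in=4, d_out=3, rank=2,
                      tasks=["mcqa", "gen_qa", "style_transfer"])
x = np.ones(4)
y = layer.forward(x, task="mcqa")
```

Because the B matrices are zero-initialized, the layer initially behaves like the frozen base model; training then moves style-related capacity into the shared expert and task-specific capacity into each per-task expert.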