PersonaPulse: Dynamic Profile Optimization for Realistic Personality Expression in LLMs

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing research lacks systematic prompt optimization to maximize personality expression in large language models (LLMs). Method: We propose an iterative prompt optimization framework that leverages the LLM’s intrinsic personality knowledge and employs contextualized response evaluation as a quantifiable scoring mechanism, enabling authentic and controllable personality modeling. Our approach integrates prompt engineering, dynamic optimization algorithms, and benchmark-based contextual response assessment, while analyzing the relationship between model scale and personality modeling capability. Contribution/Results: Experiments demonstrate that our optimized prompts significantly outperform psychology-driven baseline prompts in both authenticity and expressiveness of personality representation. Moreover, by tuning the number of optimization steps, we achieve fine-grained control over the intensity of specific personality traits. This work establishes a novel paradigm for controllable personality generation and provides empirical evidence supporting its efficacy.

📝 Abstract
Personalized Large Language Models (LLMs) have been shown to be an effective way to create more engaging and enjoyable user-AI interactions. While previous studies have explored using prompts to elicit specific personality traits in LLMs, they have not optimized these prompts to maximize personality expression. To address this limitation, we propose PersonaPulse: Dynamic Profile Optimization for Realistic Personality Expression in LLMs, a framework that leverages LLMs' inherent knowledge of personality traits to iteratively enhance role-play prompts while integrating a situational response benchmark as a scoring tool, ensuring a more realistic and contextually grounded evaluation to guide the optimization process. Quantitative evaluations demonstrate that the prompts generated by PersonaPulse outperform those of prior work, which were designed based on personality descriptions from psychological studies. Additionally, we explore the relationship between model size and personality modeling through extensive experiments. Finally, we find that, for certain personality traits, the extent of personality evocation can be partially controlled by pausing the optimization process. These findings underscore the importance of prompt optimization in shaping personality expression within LLMs, offering valuable insights for future research on adaptive AI interactions.
Problem

Research questions and friction points this paper is trying to address.

Optimizing prompts to maximize personality expression in LLMs
Creating realistic personality evaluation through situational response benchmarks
Exploring model size impact and controllable personality trait evocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages LLMs' knowledge to enhance role-play prompts
Integrates situational benchmark as scoring tool
Iteratively optimizes prompts for realistic personality expression
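The optimization loop described above can be sketched as a simple hill-climbing procedure: refine the role-play prompt, re-score it against a situational benchmark, keep the candidate only if it scores higher, and optionally pause early to dial down trait intensity. This is a minimal illustrative sketch, not the paper's implementation; the `refine` and `score` callables stand in for LLM-backed components, and the toy versions below exist only to make the loop runnable.

```python
# Hedged sketch of a PersonaPulse-style optimization loop. All names
# (optimize_profile, refine, score) are illustrative, not from the paper.

def optimize_profile(seed_prompt, refine, score, max_steps=10, target=None):
    """Iteratively refine a role-play prompt using benchmark-based scoring.

    Stopping early via `target` mimics the paper's finding that pausing
    the optimization can partially control trait intensity.
    """
    prompt = seed_prompt
    best = score(prompt)
    for _ in range(max_steps):
        candidate = refine(prompt)   # LLM proposes a stronger prompt
        s = score(candidate)         # situational-response benchmark score
        if s > best:                 # keep only improving candidates
            prompt, best = candidate, s
        if target is not None and best >= target:
            break                    # pause to cap trait intensity
    return prompt, best

# Toy stand-ins: refinement appends an intensifier; score counts them.
toy_refine = lambda p: p + " very"
toy_score = lambda p: p.count("very")
prompt, score_val = optimize_profile(
    "Act extroverted.", toy_refine, toy_score, max_steps=5, target=3
)
# Stops once the toy score reaches the target of 3.
```

In the real framework, `score` would elicit responses to benchmark scenarios and rate how strongly the target trait is expressed, and `refine` would query the LLM's own knowledge of that trait to rewrite the prompt.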
Shi-Wei Dai
Academia Sinica, Taipei, Taiwan
Yan-Wei Shie
Academia Sinica, Taipei, Taiwan
Tsung-Huan Yang
Academia Sinica, Taipei, Taiwan
Lun-Wei Ku
Research Fellow, Academia Sinica
Sentiment Analysis and Opinion Mining, Natural Language Processing, Text Mining, Information Retrieval, Computational Linguistics
Yung-Hui Li
AI Research Center, Hon Hai Research Institute, Taipei, Taiwan