Facet-Level Persona Control by Trait-Activated Routing with Contrastive SAE for Role-Playing LLMs

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of maintaining personality consistency in long-form role-playing dialogues with large language models (LLMs), where existing prompting or fine-tuning approaches suffer from limited flexibility or trait dilution. The authors propose a novel method combining contrastive learning with sparse autoencoders (Contrastive SAE) to achieve, for the first time, disentangled representations across all 30 facets of the Big Five personality model. They further introduce a trait-activation routing mechanism that dynamically injects personality control vectors into the LLM’s residual stream. Evaluated on a newly constructed, leakage-free, balanced corpus of 15,000 samples, the approach significantly enhances character consistency without compromising dialogue coherence, outperforming both CAA and prompt-only baselines, with the SAE+Prompt variant achieving the best overall performance.
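The trait-activation routing described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: it assumes one learned control vector per Big Five facet, routes by cosine similarity between the current residual-stream state and each facet vector, and adds a softmax-weighted blend of the top-activated facets. The function name, the `alpha` steering strength, and the `top_k` gate are all placeholder choices.

```python
import numpy as np

def trait_activated_routing(h, facet_vectors, alpha=4.0, top_k=3):
    """Hypothetical sketch of trait-activated routing.

    h             : (d,) residual-stream activation at one layer
    facet_vectors : (30, d) one control vector per Big Five facet
    alpha         : steering strength (assumed hyperparameter)
    top_k         : number of facets activated per step (assumed)
    """
    # Route by cosine similarity between the state and each facet vector.
    sims = facet_vectors @ h / (
        np.linalg.norm(facet_vectors, axis=1) * np.linalg.norm(h) + 1e-8
    )
    active = np.argsort(sims)[-top_k:]        # indices of activated facets
    weights = np.exp(sims[active])
    weights /= weights.sum()                  # softmax over the active set
    steer = weights @ facet_vectors[active]   # weighted control direction
    # Inject the blended personality vector back into the residual stream.
    return h + alpha * steer

rng = np.random.default_rng(0)
h = rng.normal(size=64)
V = rng.normal(size=(30, 64))
h_steered = trait_activated_routing(h, V)
```

In a real deployment this would run inside a forward hook on the chosen transformer layer rather than on standalone numpy arrays; the sparse top-k gate is what keeps inactive facets from diluting one another.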

📝 Abstract
Personality control in Role-Playing Agents (RPAs) is commonly achieved via training-free methods that inject persona descriptions and memory through prompts or retrieval-augmented generation, or via supervised fine-tuning (SFT) on persona-specific corpora. While SFT can be effective, it requires persona-labeled data and retraining for new roles, limiting flexibility. In contrast, prompt- and RAG-based signals are easy to apply but can be diluted in long dialogues, leading to persona drift and occasionally inconsistent behavior. To address this, we propose a contrastive Sparse AutoEncoder (SAE) framework that learns facet-level personality control vectors aligned with the Big Five 30-facet model. A new 15,000-sample leakage-controlled corpus is constructed to provide balanced supervision for each facet. The learned vectors are integrated into the model's residual space and dynamically selected by a trait-activated routing module, enabling precise and interpretable personality steering. Experiments on Large Language Models (LLMs) show that the proposed method maintains stable character fidelity and output quality across contextualized settings, outperforming Contrastive Activation Addition (CAA) and prompt-only baselines. The combined SAE+Prompt configuration achieves the best overall performance, confirming that contrastively trained latent vectors can enhance persona control while preserving dialogue coherence.
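The contrastive SAE objective in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's loss: it pairs a standard single-layer sparse autoencoder (reconstruction plus L1 sparsity) with a contrastive term that pushes apart the mean latent codes of high-facet versus low-facet examples. All names (`W_enc`, `W_dec`, `lam`, `tau`) are hypothetical.

```python
import numpy as np

def contrastive_sae_loss(x_pos, x_neg, W_enc, W_dec, lam=1.0, tau=0.1):
    """Illustrative contrastive-SAE loss (placeholder formulation).

    x_pos / x_neg : (n, d) activations from high- / low-facet examples
    W_enc, W_dec  : (d, m) / (m, d) encoder and decoder weights
    lam, tau      : sparsity weight and contrastive temperature (assumed)
    """
    def encode(x):
        return np.maximum(x @ W_enc, 0.0)            # ReLU sparse code

    def sae_loss(x, z):
        # Reconstruction error plus L1 sparsity on the latent code.
        return np.mean((z @ W_dec - x) ** 2) + lam * np.mean(np.abs(z))

    z_pos, z_neg = encode(x_pos), encode(x_neg)
    # Contrastive term: the two poles of a facet should occupy
    # dissimilar latent directions, so penalize their cosine similarity.
    mu_p, mu_n = z_pos.mean(axis=0), z_neg.mean(axis=0)
    sim = mu_p @ mu_n / (np.linalg.norm(mu_p) * np.linalg.norm(mu_n) + 1e-8)
    return sae_loss(x_pos, z_pos) + sae_loss(x_neg, z_neg) + sim / tau

rng = np.random.default_rng(1)
x_pos = rng.normal(size=(16, 32))        # high-facet activations (toy data)
x_neg = rng.normal(size=(16, 32))        # low-facet activations (toy data)
W_enc = rng.normal(size=(32, 128)) * 0.1
W_dec = rng.normal(size=(128, 32)) * 0.1
loss = contrastive_sae_loss(x_pos, x_neg, W_enc, W_dec)
```

Trained this way, the difference between the pole-wise decoder directions would serve as the facet's control vector; the leakage-controlled, per-facet-balanced corpus described above is what makes such pole pairs well defined.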
Problem

Research questions and friction points this paper is trying to address.

personality control
role-playing agents
persona consistency
facet-level traits
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive Sparse AutoEncoder
Facet-Level Persona Control
Trait-Activated Routing
Role-Playing LLMs
Big Five Personality Model