Localizing Persona Representations in LLMs

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how human-like personae (encompassing values, ethical stances, and political orientations) are encoded in the representation space of large language models (LLMs). Methodologically, the authors apply t-SNE and PCA for dimensionality reduction, quantify inter-layer activation differences, compare across models, and analyze semantic similarity. The key finding is that persona representations diverge substantially only in the final third of the decoder layers. Ethical stances such as moral nihilism and utilitarianism show overlapping activations, suggesting a degree of polysemy, whereas political orientations such as conservatism and liberalism occupy more clearly separated regions. These findings hold across several pre-trained decoder-only LLMs and can inform work on controllable persona modulation, value alignment, and model interpretability.

📝 Abstract
We present a study on how and where personas -- defined by distinct sets of human characteristics, values, and beliefs -- are encoded in the representation space of large language models (LLMs). Using a range of dimension reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to others, including their shared and distinct embedding spaces. We find that, across multiple pre-trained decoder-only LLMs, the analyzed personas show large differences in representation space only within the final third of the decoder layers. We observe overlapping activations for specific ethical perspectives -- such as moral nihilism and utilitarianism -- suggesting a degree of polysemy. In contrast, political ideologies like conservatism and liberalism appear to be represented in more distinct regions. These findings help to improve our understanding of how LLMs internally represent information and can inform future efforts in refining the modulation of specific human traits in LLM outputs. Warning: This paper includes potentially offensive sample statements.
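The layer-localization step described in the abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: it uses synthetic activations in place of real hidden states (with an actual model you would collect per-layer decoder activations for persona-conditioned prompts), and a numpy-based PCA to measure how far apart two persona clusters sit at each layer. The 12-layer setup, cluster shapes, and all function names are assumptions made for the demo.

```python
# Hedged sketch: locating decoder layers where two persona representations
# diverge, in the spirit of the paper's analysis. Activations are synthetic;
# no LLM is loaded.
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_prompts, hidden = 12, 40, 64  # assumed toy dimensions

def synthetic_activations(separation):
    """Two persona 'clouds' whose mean separation grows with `separation`."""
    a = rng.normal(0.0, 1.0, (n_prompts, hidden))
    b = rng.normal(separation, 1.0, (n_prompts, hidden))
    return a, b

def layer_divergence(a, b, k=2):
    """Distance between persona centroids in a shared top-k PCA space."""
    stacked = np.vstack([a, b])
    mu = stacked.mean(axis=0)
    _, _, vt = np.linalg.svd(stacked - mu, full_matrices=False)
    za = (a - mu) @ vt[:k].T
    zb = (b - mu) @ vt[:k].T
    return float(np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0)))

# Mimic the reported pattern: separation emerges only in the final third.
cutoff = 2 * n_layers // 3
divergences = []
for layer in range(n_layers):
    sep = 2.0 if layer >= cutoff else 0.0
    divergences.append(layer_divergence(*synthetic_activations(sep)))

top_third = bool(np.argmax(divergences) >= cutoff)
print(top_third)  # the largest divergence falls in the late layers
```

With real hidden states, `synthetic_activations` would be replaced by a pass over persona-conditioned prompts that records each layer's output; the divergence profile across layers then indicates where persona information becomes linearly salient.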
Problem

Research questions and friction points this paper is trying to address.

Identify layers encoding persona representations in LLMs
Analyze shared and distinct embedding spaces of personas
Examine representation differences in ethical and political perspectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Identify persona encoding layers using dimension reduction
Analyze activations for shared and distinct embedding spaces
Observe overlapping ethical perspectives in final layers
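To illustrate the shared-vs-distinct embedding analysis listed above, the sketch below compares persona clusters with a simple centroid cosine similarity. The data are synthetic and the cluster centers are chosen by hand to mimic the paper's qualitative finding (overlapping ethical stances, separated political ideologies); this is not the paper's actual measurement pipeline, and all names are hypothetical.

```python
# Hedged sketch: measuring overlap vs separation between persona clusters
# via cosine similarity of their centroids. Data are synthetic stand-ins
# for per-persona activation vectors.
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 32  # assumed number of samples and feature dimension

def persona_cloud(center, spread=1.0):
    """Synthetic activation samples for one persona."""
    return rng.normal(center, spread, (n, d))

def centroid_cosine(a, b):
    """Cosine similarity between the mean vectors of two clusters."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    return float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))

# Ethical stances: nearby centers -> overlapping, high similarity.
nihilism = persona_cloud(np.full(d, 1.0))
utilitarian = persona_cloud(np.full(d, 1.1))

# Political ideologies: opposed centers -> separated, low similarity.
conservative = persona_cloud(np.full(d, 1.0))
liberal = persona_cloud(np.full(d, -1.0))

print(centroid_cosine(nihilism, utilitarian))   # expected near 1 (overlap)
print(centroid_cosine(conservative, liberal))   # expected near -1 (distinct)
```

In the paper's setting the clouds would come from model activations at a late decoder layer; the same centroid comparison (or a clustering metric) then distinguishes polysemous, overlapping personas from spatially separated ones.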
Celia Cintas
IBM Research Africa, Nairobi, Kenya
Miriam Rateike
IBM Research Africa, Saarland University
Artificial Intelligence, Machine Learning, Fairness, Causality, Probabilistic Learning
Erik Miehling
IBM Research
control, reinforcement learning, game theory, artificial intelligence
Elizabeth Daly
IBM Research Europe, Dublin, Ireland
Skyler Speakman
IBM Research Africa, Nairobi, Kenya