🤖 AI Summary
This study investigates how human-like personas -- spanning values, ethical stances, and political orientations -- are encoded in the representation space of large language models (LLMs). Methodologically, the authors apply t-SNE and PCA for dimensionality reduction, quantify inter-layer differences in persona activations, compare results across models, and analyze semantic similarity between persona representations. The central finding is a locality pattern: persona representations diverge sharply only in the final third of the decoder layers. Within those layers, ethical stances such as moral nihilism and utilitarianism show overlapping, polysemous representations, whereas political orientations such as conservatism and liberalism occupy more clearly separated regions. These observations hold across several mainstream decoder-only LLMs. The results offer empirical grounding for controllable persona injection, value alignment, and model interpretability, identifying a concrete region of the transformer decoder where persona-related computation can be targeted.
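The pipeline can be pictured with a minimal sketch along these lines, assuming mean-pooled hidden states as the per-layer persona representation and mean pairwise cosine distance as the divergence measure; the model name (`gpt2`), the persona prompts, and the pooling choice are illustrative stand-ins, not the study's actual setup. PCA substitutes for the paper's t-SNE/PCA step here because t-SNE needs more samples than this toy example provides:

```python
# Minimal sketch of the layer-wise persona-activation analysis.
# Assumptions: model, prompts, mean pooling, and cosine distance are
# placeholders, not the paper's exact protocol.
import torch
from sklearn.decomposition import PCA
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # assumption: stands in for any decoder-only LLM
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

# Hypothetical persona-conditioned prompts (placeholders, not the study's data).
prompts = {
    "utilitarian": "As a utilitarian, I believe the right action is",
    "moral_nihilist": "As a moral nihilist, I believe the right action is",
    "conservative": "As a conservative, I believe the government should",
    "liberal": "As a liberal, I believe the government should",
}

# One mean-pooled activation vector per layer per persona.
acts = {}  # persona -> (n_layers + 1, hidden_dim), incl. the embedding layer
with torch.no_grad():
    for name, text in prompts.items():
        hidden = model(**tok(text, return_tensors="pt")).hidden_states
        acts[name] = torch.stack([h.mean(dim=1).squeeze(0) for h in hidden])

# Inter-persona divergence per layer: mean pairwise cosine distance.
names = list(acts)
n_layers = acts[names[0]].shape[0]
for layer in range(n_layers):
    vecs = torch.nn.functional.normalize(
        torch.stack([acts[n][layer] for n in names]), dim=-1)
    sim = vecs @ vecs.T
    off_diag = sim[~torch.eye(len(names), dtype=torch.bool)]
    print(f"layer {layer:2d}: mean pairwise cosine distance = "
          f"{1.0 - off_diag.mean().item():.4f}")

# Inspect a late layer (final third of the stack) with a 2-D PCA projection.
layer = n_layers - 2
coords = PCA(n_components=2).fit_transform(
    torch.stack([acts[n][layer] for n in names]).numpy())
for n, (x, y) in zip(names, coords):
    print(f"{n:>14s}: ({x:+.3f}, {y:+.3f})")
```

Under the paper's finding, the per-layer divergence scores printed by a sketch like this would rise noticeably only in roughly the last third of the stack, with the ethical personas sitting closer together in the 2-D projection than the political ones.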
📝 Abstract
We present a study of how and where personas -- defined by distinct sets of human characteristics, values, and beliefs -- are encoded in the representation space of large language models (LLMs). Using a range of dimensionality reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to others, including their shared and distinct embedding spaces. We find that, across multiple pre-trained decoder-only LLMs, the analyzed personas show large differences in representation space only within the final third of the decoder layers. We observe overlapping activations for specific ethical perspectives -- such as moral nihilism and utilitarianism -- suggesting a degree of polysemy. In contrast, political ideologies like conservatism and liberalism appear to be represented in more distinct regions. These findings improve our understanding of how LLMs internally represent information and can inform future efforts to refine the modulation of specific human traits in LLM outputs. Warning: This paper includes potentially offensive sample statements.