Linear Representations of Political Perspective Emerge in Large Language Models

📅 2025-03-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether large language models (LLMs) implicitly encode U.S. political ideology—specifically along the liberal–conservative spectrum—and whether such representations can be interpreted and controlled. The authors identify linearly separable ideological embeddings in the intermediate attention head activation spaces of Llama-2, Mistral, and Vicuna. Using linear probes trained on DW-NOMINATE political scores, they demonstrate accurate ideological prediction in generated text and cross-task generalization to news media bias classification. Moreover, targeted linear interventions applied to highly predictive attention heads enable real-time, controllable steering of output political orientation. These findings establish a novel paradigm for interpretable modeling, monitoring, and editing of subjective perspectives in LLMs, advancing transparency and alignment in generative AI systems.

📝 Abstract
Large language models (LLMs) have demonstrated the ability to generate text that realistically reflects a range of different subjective human perspectives. This paper studies how LLMs are seemingly able to reflect more liberal versus more conservative viewpoints among other political perspectives in American politics. We show that LLMs possess linear representations of political perspectives within activation space, wherein more similar perspectives are represented closer together. To do so, we probe the attention heads across the layers of three open transformer-based LLMs (`Llama-2-7b-chat`, `Mistral-7b-instruct`, `Vicuna-7b`). We first prompt models to generate text from the perspectives of different U.S. lawmakers. We then identify sets of attention heads whose activations linearly predict those lawmakers' DW-NOMINATE scores, a widely-used and validated measure of political ideology. We find that highly predictive heads are primarily located in the middle layers, often speculated to encode high-level concepts and tasks. Using probes only trained to predict lawmakers' ideology, we then show that the same probes can predict measures of news outlets' slant from the activations of models prompted to simulate text from those news outlets. These linear probes allow us to visualize, interpret, and monitor ideological stances implicitly adopted by an LLM as it generates open-ended responses. Finally, we demonstrate that by applying linear interventions to these attention heads, we can steer the model outputs toward a more liberal or conservative stance. Overall, our research suggests that LLMs possess a high-level linear representation of American political ideology and that by leveraging recent advances in mechanistic interpretability, we can identify, monitor, and steer the subjective perspective underlying generated text.
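The probing setup the abstract describes can be sketched with synthetic data. Everything below is illustrative: the activation matrix, the latent ideology axis, and the scores are fabricated stand-ins for the paper's actual pipeline, in which activations are collected from attention heads while the model writes "as" each lawmaker and the targets are real DW-NOMINATE scores.

```python
import numpy as np

# Synthetic stand-in for the paper's data: one attention head's activations
# (n_lawmakers x head_dim) paired with each lawmaker's DW-NOMINATE score.
rng = np.random.default_rng(0)
n_lawmakers, head_dim = 400, 128
ideology_axis = rng.normal(size=head_dim)       # assumed latent direction
scores = rng.uniform(-1, 1, size=n_lawmakers)   # DW-NOMINATE, 1st dimension
acts = np.outer(scores, ideology_axis) + 0.1 * rng.normal(size=(n_lawmakers, head_dim))

# Fit a linear probe (least squares with intercept) on a train split,
# then evaluate held-out R^2; heads with high R^2 are "highly predictive".
train, test = slice(0, 300), slice(300, 400)
X = np.hstack([acts, np.ones((n_lawmakers, 1))])  # intercept column
w, *_ = np.linalg.lstsq(X[train], scores[train], rcond=None)

pred = X[test] @ w
r2 = 1 - np.sum((scores[test] - pred) ** 2) / np.sum(
    (scores[test] - scores[test].mean()) ** 2
)
```

In the paper this fit is repeated per attention head and per layer, which is how the concentration of predictive heads in the middle layers is observed.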
Problem

Research questions and friction points this paper is trying to address.

Do LLMs encode political perspectives linearly in activation space?
Can linear probes predict the political ideology expressed in LLM-generated text?
Can linear interventions steer LLM outputs toward a specific ideology?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear probes predict political ideology from LLM activations.
Attention heads in middle layers encode political perspectives.
Linear interventions steer LLM outputs toward specific ideologies.
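The steering intervention can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: `probe_weights` stands in for a weight vector learned by an ideology probe, and the activation is a fabricated example rather than a real attention-head output.

```python
import numpy as np

def steer(head_activation, probe_weights, alpha):
    """Shift an attention head's activation by alpha units along the
    probe's (unit-normalized) ideology axis; the sign of alpha selects
    the direction of the shift."""
    direction = probe_weights / np.linalg.norm(probe_weights)
    return head_activation + alpha * direction

rng = np.random.default_rng(0)
w = rng.normal(size=64)            # hypothetical probe weight vector
a = rng.normal(size=64)            # hypothetical head activation at one token
a_pos = steer(a, w, alpha=+2.0)    # push toward one ideological pole
a_neg = steer(a, w, alpha=-2.0)    # push toward the other

# The probe's ideology readout moves monotonically with alpha:
assert w @ a_pos > w @ a > w @ a_neg
```

Because the intervention is a single vector addition per selected head, it can be applied at generation time, which is what makes the steering "real-time" in the summary above.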