🤖 AI Summary
Existing research lacks systematic investigation into the cross-architectural, cross-scale, and cross-lingual prevalence and controllability of political bias in large language models (LLMs). This paper presents the first large-scale empirical analysis of political orientation across 14 languages using mainstream open-source LLMs. We employ the Political Compass test coupled with semantically equivalent paraphrasing to ensure robust measurement and propose Centroid Activation Intervention (CAI), a novel method for targeted ideological alignment. Key findings are: (1) model scale exhibits a positive correlation with libertarian-left orientation; (2) significant inter-lingual and inter-architectural variation exists in political bias; and (3) CAI reliably steers model outputs toward specified ideological coordinates. Our work establishes both theoretical foundations and practical methodologies for politically aligned, controllable, and multilingual LLM governance.
📝 Abstract
Large language models (LLMs) are increasingly used in everyday tools and applications, raising concerns about their potential influence on political views. While prior research has shown that LLMs often exhibit measurable political biases, frequently skewing toward liberal or progressive positions, key gaps remain. Most existing studies evaluate only a narrow set of models and languages, leaving open questions about the generalizability of political biases across architectures, scales, and multilingual settings. Moreover, few works examine whether these biases can be actively controlled.
In this work, we address these gaps through a large-scale study of political orientation in modern open-source instruction-tuned LLMs. We evaluate seven models, including LLaMA-3.1, Qwen-3, and Aya-Expanse, across 14 languages using the Political Compass Test with 11 semantically equivalent paraphrases per statement to ensure robust measurement. Our results reveal that larger models consistently shift toward libertarian-left positions, with significant variations across languages and model families. To test the manipulability of political stances, we utilize a simple center-of-mass activation intervention technique and show that it reliably steers model responses toward alternative ideological positions across multiple languages. Our code is publicly available at https://github.com/d-gurgurov/Political-Ideologies-LLMs.
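The abstract does not spell out the intervention's mechanics, but the name suggests standard centroid-based activation steering. Below is a minimal, self-contained sketch on synthetic data, assuming the method computes the centroids of hidden states collected from prompts at two ideological poles and shifts generation-time activations along the centroid-difference direction; all array names, sizes, and the scaling factor `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden states from one model layer, collected while the model
# processes prompts expressing two opposing ideological poles.
acts_pole_a = rng.normal(loc=0.5, scale=1.0, size=(50, 16))   # e.g. pole A prompts
acts_pole_b = rng.normal(loc=-0.5, scale=1.0, size=(50, 16))  # e.g. pole B prompts

# Centroid (center of mass) of each pole's activations.
centroid_a = acts_pole_a.mean(axis=0)
centroid_b = acts_pole_b.mean(axis=0)

# Unit steering direction pointing from pole B toward pole A.
direction = centroid_a - centroid_b
direction /= np.linalg.norm(direction)

def steer(hidden_state: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Shift a hidden state along the centroid-difference direction.

    In a real model this would run inside a forward hook on the chosen layer.
    """
    return hidden_state + alpha * direction

# A steered state projects further onto the pole-A-vs-pole-B axis.
h = rng.normal(size=16)
print(direction @ steer(h) > direction @ h)  # True: projection increases
```

In practice the same arithmetic would be applied to transformer residual-stream activations via a forward hook, with the sign of `alpha` selecting which pole to steer toward.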