🤖 AI Summary
This study investigates explicit and implicit political biases in large language models (LLMs) and their societal implications. Method: We employ the Political Compass test, role-based prompting, and multilingual comparative evaluation, integrated with cross-dimensional sociodemographic analysis across eight mainstream LLMs. Contribution/Results: We empirically demonstrate that LLMs exhibit a pervasive leftward political orientation; implicit bias is stronger than explicit bias yet highly correlated with it; implicit bias intensifies in multilingual settings; and most models display “intra-model bias alignment”—i.e., consistency between self-reported stances and behavioral outputs. Crucially, this work establishes the first interpretable pathway for political bias in LLMs and reveals its cross-lingual stability. It provides a reproducible methodological framework and empirical benchmark for bias detection, model alignment, and AI governance in democratic contexts.
📝 Abstract
Large Language Models (LLMs) are increasingly integral to information dissemination and decision-making processes. Given their growing societal influence, understanding potential biases, particularly within the political domain, is crucial to prevent undue influence on public opinion and democratic processes. This work investigates political bias and stereotype propagation across eight prominent LLMs using the two-dimensional Political Compass Test (PCT). Initially, the PCT is employed to assess the inherent political leanings of these models. Subsequently, persona prompting with the PCT is used to explore explicit stereotypes across various social dimensions. In a final step, implicit stereotypes are uncovered by evaluating models with multilingual versions of the PCT. Key findings reveal a consistent left-leaning political alignment across all investigated models. Furthermore, while the nature and extent of stereotypes vary considerably between models, implicit stereotypes elicited through language variation are more pronounced than those identified via explicit persona prompting. Interestingly, for most models, implicit and explicit stereotypes show a notable alignment, suggesting a degree of transparency or "awareness" regarding their inherent biases. This study underscores the complex interplay of political bias and stereotypes in LLMs.
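The evaluation pipeline described above (persona-conditioned PCT prompting, then aggregating Likert answers into axis scores) can be sketched roughly as follows. Note this is a minimal illustration of the general approach, not the authors' exact protocol: the prompt template, item texts, reverse-coding polarities, and the agreement-to-score mapping are all assumptions.

```python
# Hypothetical sketch of PCT-style scoring with persona prompting.
# The Likert mapping and reverse-coding scheme are illustrative assumptions.
LIKERT = {"strongly disagree": -2, "disagree": -1, "agree": 1, "strongly agree": 2}

def build_prompt(persona: str, statement: str) -> str:
    """Persona-conditioned prompt for one PCT statement (hypothetical template)."""
    return (f'You are {persona}. Do you agree with: "{statement}"? '
            "Answer with one of: strongly disagree, disagree, agree, strongly agree.")

def score_axis(responses: list[str], polarity: list[int]) -> float:
    """Average signed Likert scores for one axis.

    `polarity` is +1 for items worded with the axis and -1 for
    reverse-coded items, so all scores point the same direction.
    """
    vals = [LIKERT[r.lower()] * p for r, p in zip(responses, polarity)]
    return sum(vals) / len(vals)

# Hypothetical model answers to three economic-axis items;
# the second item is treated as reverse-coded.
answers = ["disagree", "strongly agree", "disagree"]
polarity = [+1, -1, +1]
print(round(score_axis(answers, polarity), 2))  # → -1.33
```

In the study's setup, the same items would be re-asked across personas (explicit stereotypes) and across translated versions of the test (implicit stereotypes), and the resulting axis scores compared per model.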