🤖 AI Summary
This work examines the implicit value alignment of large language models (LLMs) along the democracy–authoritarianism political spectrum, a dimension that has received far less attention than conventional left–right or demographic bias paradigms.
Method: The authors propose a dedicated democracy–authoritarianism evaluation framework combining three components: the F-scale, an established psychometric instrument for authoritarian tendencies; FavScore, a newly introduced metric for scoring model favorability toward world leaders; and multilingual role-based probing prompts.
Contribution/Results: Empirical analysis shows that LLMs exhibit an overall democratic value orientation. However, Mandarin-language prompts significantly increase favorability toward authoritarian leaders (higher FavScore), and models frequently cite such figures as role models even in non-political contexts, suggesting that these orientations are deeply embedded. The study provides a systematic, quantitative assessment of LLMs' geopolitical value biases, contributing both a methodology and empirical evidence for AI value alignment and cross-cultural safety evaluation.
📝 Abstract
As Large Language Models (LLMs) become increasingly integrated into everyday life and information ecosystems, concerns about their implicit biases persist. While prior work has primarily examined socio-demographic and left–right political dimensions, little attention has been paid to how LLMs align with broader geopolitical value systems, particularly the democracy–authoritarianism spectrum. In this paper, we propose a novel methodology to assess such alignment, combining (1) the F-scale, a psychometric tool for measuring authoritarian tendencies, (2) FavScore, a newly introduced metric for evaluating model favorability toward world leaders, and (3) role-model probing to assess which figures are cited as general role models by LLMs. We find that LLMs generally favor democratic values and leaders, but exhibit increased favorability toward authoritarian figures when prompted in Mandarin. Further, models are found to often cite authoritarian figures as role models, even outside explicit political contexts. These results shed light on ways LLMs may reflect and potentially reinforce global political ideologies, highlighting the importance of evaluating bias beyond conventional socio-political axes. Our code is available at: https://github.com/irenestrauss/Democratic-Authoritarian-Bias-LLMs
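The paper defines FavScore in its methodology section; as a rough illustration only, the kind of per-leader, per-language favorability aggregation the abstract describes could be sketched as below. All function names, data shapes, and numbers here are hypothetical and not the paper's actual formula or data; the step of extracting a numeric rating from each model response is omitted.

```python
from statistics import mean


def favscore(ratings):
    """Average favorability rating for one leader (hypothetical aggregation).

    `ratings` are numeric scores (e.g., on a 1-5 scale) extracted from model
    responses to probing prompts; the extraction step is not shown here."""
    return mean(ratings)


def compare_by_language(results):
    """Aggregate per-leader scores separately for each prompt language.

    `results` maps language code -> leader -> list of ratings, so that
    cross-lingual shifts in favorability can be compared side by side."""
    return {
        lang: {leader: favscore(r) for leader, r in leaders.items()}
        for lang, leaders in results.items()
    }


# Toy illustration with fabricated numbers (not the paper's data):
toy = {
    "en": {"leader_a": [4, 5, 4], "leader_b": [2, 2, 3]},
    "zh": {"leader_a": [4, 4, 4], "leader_b": [3, 3, 4]},
}
scores = compare_by_language(toy)
```

A real pipeline would additionally need prompt templates per language and a response-scoring step (human or model-based), which this sketch deliberately leaves out.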