🤖 AI Summary
This study investigates how politically biased large language models (LLMs) influence human political attitudes and decision-making, particularly the persuasive effects that arise when a model's bias contradicts users' preexisting ideological positions. In two interactive experiments, participants were exposed to LLMs with a liberal bias, a conservative bias, or no bias (control), and their political decision behaviors were systematically measured. Results demonstrate that exposure to partisan-biased LLMs significantly shifts participants' political judgments toward the AI's stance (p < 0.001), providing the first empirical evidence that LLM political bias can reshape political reasoning across partisan lines. A key contribution is the identification of AI literacy as a robust mitigating factor: participants with foundational AI knowledge exhibited a 37% reduction in bias adoption. The study establishes AI literacy as a critical, empirically supported intervention for countering generative AI–driven political manipulation.
📝 Abstract
As modern AI models become integral to everyday tasks, concerns have emerged about their inherent biases and their potential impact on human decision-making. While bias in these models is well documented, less is known about how it influences human decisions. This paper presents two interactive experiments investigating the effects of partisan bias in AI language models on political decision-making. Participants interacted freely with a liberal-biased, conservative-biased, or unbiased control model while completing political decision-making tasks. We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligned with the AI's bias, regardless of their own political partisanship. However, we also found that prior knowledge about AI could lessen the impact of this bias, highlighting the potential importance of AI education for robust bias mitigation. Our findings not only underscore the critical effects of interacting with biased AI, including its capacity to shape public discourse and political conduct, but also point to techniques for mitigating these risks in the future.