Examining Alignment of Large Language Models through Representative Heuristics: The Case of Political Stereotypes

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study identifies a systematic amplification of partisan stereotypes in large language models (LLMs) when expressing political viewpoints, particularly when imitating party-aligned stances, resulting in significant deviations from empirically measured positions. To address this, we introduce the cognitive science concept of the “representativeness heuristic” into LLM alignment analysis for the first time, designing a heuristic-grounded comparative experimental framework that integrates quantitative political stance assessment, controllable prompt engineering, and human response benchmarks. Results demonstrate that LLMs exhibit stronger representativeness bias than human respondents, leading to ideological extremization; our interpretable prompting strategy effectively mitigates this bias, substantially improving alignment between model outputs and evidence-based political positions. This work advances theoretical understanding of the cognitive mechanisms underlying LLM political bias and proposes an interpretable, intervention-ready pathway for bias mitigation.
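As an illustration of how such partisan exaggeration could be quantified, the sketch below compares party-persona LLM estimates against empirical human benchmarks and scores how far each estimate is pushed in the stereotype-consistent direction. All values, function names, and the scoring rule are illustrative assumptions, not the paper's actual data or metric.

```python
# Hypothetical sketch: quantifying partisan exaggeration in LLM-attributed positions.
# All numbers and names are illustrative placeholders, not the paper's data.

from statistics import mean

# Empirical benchmark: share of each party's supporters agreeing with an issue
# (e.g., drawn from survey data). Values here are made up for illustration.
human_benchmark = {
    ("immigration_restriction", "Party A"): 0.62,
    ("immigration_restriction", "Party B"): 0.31,
}

# LLM estimates elicited while the model imitates a party-aligned persona.
# These placeholders stand in for parsed model outputs.
llm_estimates = {
    ("immigration_restriction", "Party A"): 0.85,
    ("immigration_restriction", "Party B"): 0.12,
}

def exaggeration(issue_party, benchmark, estimates):
    """Signed deviation in the stereotype-consistent direction: positive when
    the model's estimate is more extreme than the benchmark (further from 0.5
    on the same side), negative when it is more moderate."""
    b = benchmark[issue_party]
    e = estimates[issue_party]
    direction = 1 if b >= 0.5 else -1  # side of the scale the party leans toward
    return direction * (e - b)

scores = [exaggeration(k, human_benchmark, llm_estimates) for k in human_benchmark]
print(f"mean exaggeration toward the stereotype: {mean(scores):+.2f}")
```

A positive mean indicates that the model's attributed positions are more extreme than the empirical benchmark, which is the pattern the study reports for party-imitating prompts.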

📝 Abstract
Examining the alignment of large language models (LLMs) has become increasingly important, particularly when these systems fail to operate as intended. This study explores the challenge of aligning LLMs with human intentions and values, with specific focus on their political inclinations. Previous research has highlighted LLMs' propensity to display political leanings, and their ability to mimic certain political parties' stances on various issues. However, the extent and conditions under which LLMs deviate from empirical positions have not been thoroughly examined. To address this gap, our study systematically investigates the factors contributing to LLMs' deviations from empirical positions on political issues, aiming to quantify these deviations and identify the conditions that cause them. Drawing on cognitive science findings related to representativeness heuristics -- where individuals readily recall the representative attribute of a target group in a way that leads to exaggerated beliefs -- we scrutinize LLM responses through the lens of this heuristic. We conduct experiments to determine how LLMs exhibit stereotypes by inflating judgments in favor of specific political parties. Our results indicate that while LLMs can mimic certain political parties' positions, they often exaggerate these positions more than human respondents do. Notably, LLMs tend to overemphasize representativeness to a greater extent than humans. This study highlights the susceptibility of LLMs to representativeness heuristics, suggesting potential vulnerabilities to political stereotypes. We propose prompt-based mitigation strategies that demonstrate effectiveness in reducing the influence of representativeness in LLM responses.
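To make the proposed intervention concrete, here is a minimal sketch of what a prompt-based mitigation could look like: a baseline party-persona prompt versus the same prompt augmented with an instruction to rely on representative survey evidence. The prompt wording, the `query_llm` stub, and the issue and party names are assumptions for illustration, not the authors' exact prompts.

```python
# Hypothetical sketch of a prompt-based mitigation: the baseline prompt elicits a
# party-aligned persona response, while the mitigated prompt adds an instruction
# to ground the estimate in survey evidence rather than group stereotypes.

def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., via an API client).
    Replace with a real completion request in practice."""
    return "0.50"  # dummy response so the sketch runs end to end

issue = "Raising the federal minimum wage"
party = "Party A"

baseline_prompt = (
    f"Answer as a typical supporter of {party}. "
    f"What fraction of {party} supporters agree with: '{issue}'? "
    "Respond with a number between 0 and 1."
)

mitigated_prompt = (
    baseline_prompt
    + " Base your estimate on representative survey evidence rather than on the "
      "most stereotypical members of the party, and avoid exaggeration."
)

for label, prompt in [("baseline", baseline_prompt), ("mitigated", mitigated_prompt)]:
    print(label, "->", query_llm(prompt))
```

Comparing the two conditions against an empirical benchmark (as in the earlier sketch) is one way to test whether the added instruction reduces stereotype-driven exaggeration.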
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Political Bias
Stereotype Amplification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantifying Political Bias
Stereotype Amplification
Question Reformulation