🤖 AI Summary
Large language models (LLMs) harbor implicit political biases that may adversely affect downstream applications. Existing analysis approaches rely on small-scale intermediate tasks and on LLM self-assessment, and are therefore prone to propagating bias. To address this, we propose a novel paradigm for quantifying political bias based on target-oriented sentiment classification (TSC). We construct a cross-lingual, fine-grained political-spectrum benchmark by injecting the names of 1,319 politicians from diverse countries into 450 political statements. We introduce an entropy-based inconsistency metric to mitigate self-assessment bias and combine fictional name substitution with statistical aggregation. Our analysis reveals, for the first time: (1) systematic sentiment bias toward left-wing and far-right politicians across LLMs; (2) stronger bias for Western languages; (3) higher bias magnitude but greater cross-lingual consistency in larger models; and (4) partial mitigation of TSC unreliability via fictionalization.
📝 Abstract
Political biases encoded by LLMs might have detrimental effects on downstream applications. Existing bias analysis methods rely on small-scale intermediate tasks (questionnaire answering or political content generation) and on the LLMs themselves for analysis, thus propagating bias. We propose a new approach leveraging the observation that LLM sentiment predictions vary with the target entity in the same sentence. We define an entropy-based inconsistency metric to encode this prediction variability. We insert 1,319 demographically and politically diverse politician names into 450 political sentences and predict target-oriented sentiment using seven models in six widely spoken languages. We observe inconsistencies in all tested combinations and aggregate them in a statistically robust analysis at different granularity levels. We observe positive and negative bias toward left and far-right politicians, respectively, and positive correlations between politicians with similar alignment. Bias intensity is higher for Western languages than for others. Larger models exhibit stronger and more consistent biases and reduce discrepancies between similar languages. We partially mitigate LLM unreliability in target-oriented sentiment classification (TSC) by replacing politician names with fictional but plausible counterparts.
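The abstract does not spell out the inconsistency metric's formula. The sketch below is one plausible reading, assuming the metric is the Shannon entropy of the sentiment labels a model predicts for the same sentence template as the target name is swapped; the function name, label set, and example values are illustrative assumptions, not the paper's code.

```python
import math
from collections import Counter

def inconsistency_entropy(predictions):
    """Shannon entropy (in bits) of the sentiment labels predicted for one
    sentence template across all injected target names.

    `predictions` is a list of labels such as ["positive", "negative", ...],
    one per politician name inserted into the same sentence. An entropy of 0
    means the model predicts the same sentiment regardless of the target;
    higher values mean the prediction shifts with the name, which is the
    bias signal this approach aggregates.
    """
    counts = Counter(predictions)
    total = len(predictions)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical example: one sentence, four different names injected.
preds = ["positive", "positive", "negative", "positive"]
print(f"{inconsistency_entropy(preds):.3f} bits")  # 0.811
```

Under this reading, per-sentence entropies would then be aggregated statistically across sentences, languages, and models at the different granularity levels the abstract describes.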