A Systematic Analysis of Biases in Large Language Models

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit latent biases across political, ideological, geopolitical alliance, linguistic, and gender dimensions—yet systematic, cross-dimensional evaluation remains lacking. Method: We introduce the first unified, quantitative framework for multidimensional bias assessment, comprising a reproducible multi-task benchmark: news summarization, stance classification, UN voting simulation, multilingual story completion, and value-aligned response generation. Our methodology employs prompt-engineering–driven controlled evaluation, multilingual zero-shot transfer testing, and structured metrics—including Alliance Bias Index and Gender Response Entropy Difference. Contribution/Results: All evaluated models (GPT-4, Claude, Llama, Gemini) display significant heterogeneous biases: average political neutrality deviation of 37%, 42% performance degradation on non-English tasks due to English-centric training, and systematic stereotypical gender associations in responses. This work establishes both a methodological foundation and an empirical benchmark for rigorous fairness assessment of LLMs.
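The summary names a "Gender Response Entropy Difference" metric but does not give its formula. One plausible formulation (an assumption for illustration, not the paper's definition) compares the Shannon entropy of a model's categorical answer distributions under male- versus female-framed prompts:

```python
from collections import Counter
from math import log2

def entropy(responses):
    """Shannon entropy (in bits) of a list of categorical responses."""
    counts = Counter(responses)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gender_response_entropy_difference(male_responses, female_responses):
    """Absolute entropy gap between answers to male- vs female-framed prompts.

    A larger gap means the model's answers are less uniform for one gender
    framing, a possible signal of stereotyped response behavior.
    """
    return abs(entropy(male_responses) - entropy(female_responses))

# Toy example: identical answer distributions yield a zero difference.
m = ["agree", "agree", "disagree", "neutral"]
f = ["agree", "agree", "disagree", "neutral"]
print(gender_response_entropy_difference(m, f))  # 0.0
```

The function names and the entropy-gap formulation are hypothetical; the paper may condition on survey items or use a different divergence.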

📝 Abstract
Large language models (LLMs) have rapidly become indispensable tools for acquiring information and supporting human decision-making. However, ensuring that these models uphold fairness across varied contexts is critical to their safe and responsible deployment. In this study, we undertake a comprehensive examination of four widely adopted LLMs, probing their underlying biases and inclinations across the dimensions of politics, ideology, alliance, language, and gender. Through a series of carefully designed experiments, we investigate their political neutrality using news summarization, ideological biases through news stance classification, tendencies toward specific geopolitical alliances via United Nations voting patterns, language bias in the context of multilingual story completion, and gender-related affinities as revealed by responses to the World Values Survey. Results indicate that while the LLMs are aligned to be neutral and impartial, they still show biases and affinities of different types.
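The UN-voting experiment suggests an agreement-rate style measure of geopolitical alliance bias. A minimal sketch of such a measure (the paper's actual Alliance Bias Index computation is not reproduced here; this formulation is assumed) compares a model's simulated votes against the historical votes of two blocs:

```python
def agreement_rate(model_votes, bloc_votes):
    """Fraction of resolutions where the model's vote matches the bloc's vote."""
    matches = sum(m == b for m, b in zip(model_votes, bloc_votes))
    return matches / len(model_votes)

def alliance_bias_index(model_votes, bloc_a_votes, bloc_b_votes):
    """Signed agreement gap: positive leans toward bloc A, negative toward bloc B."""
    return agreement_rate(model_votes, bloc_a_votes) - agreement_rate(model_votes, bloc_b_votes)

# Toy example over four simulated resolutions.
votes_model = ["yes", "no", "yes", "abstain"]
votes_a     = ["yes", "no", "no",  "abstain"]  # agrees on 3 of 4
votes_b     = ["no",  "no", "yes", "yes"]      # agrees on 2 of 4
print(alliance_bias_index(votes_model, votes_a, votes_b))  # 0.25
```

A signed gap makes the direction of the lean explicit; the paper may instead aggregate over more than two blocs or weight resolutions by topic.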
Problem

Research questions and friction points this paper is trying to address.

Whether LLMs aligned for neutrality still harbor latent biases across political, ideological, alliance, linguistic, and gender dimensions
How to measure each bias dimension with a dedicated, controlled experiment rather than ad hoc probes
How four widely adopted LLMs compare under a common, cross-dimensional evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

A unified, task-based framework covering five bias dimensions in a single evaluation
News summarization repurposed as a probe of political neutrality
Multilingual story completion used to expose language bias under zero-shot transfer