🤖 AI Summary
This study investigates factual inaccuracies and cultural/ideological biases in large language models (LLMs) across multilingual and geopolitical contexts, and how these biases may shape public narratives. We propose a two-stage evaluation framework: Stage 1 quantifies query-language-induced factual bias via cross-lingual consistency analysis; Stage 2 examines how a model's training background and the query language jointly shape responses to controversial topics, using human-annotated neutral and sensitive question-answer pairs spanning four languages. Our work is the first to systematically disentangle inherent training biases from language-triggered inference biases, establishing a scalable, multilingual evaluation paradigm for sensitive issues. Experimental results show that the query language significantly distorts factual outputs (Stage 1), while responses to contentious topics are jointly determined by training provenance and input language (Stage 2), empirically validating a dual-bias mechanism.
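To make Stage 1's cross-lingual consistency analysis concrete, here is a minimal sketch of one way such a score could be computed. This is an illustration only: the `ask_llm` and `canonicalize` helpers, the language codes, and the pairwise exact-match metric are assumptions, not details taken from the paper.

```python
from itertools import combinations

def normalize(answer: str) -> str:
    """Light normalization so surface variation does not count as disagreement."""
    return answer.strip().lower()

def consistency_score(answers_by_lang: dict[str, str]) -> float:
    """Pairwise agreement of one question's answers across query languages.

    Assumes each answer has already been mapped to a shared canonical form
    (e.g., back-translated to English or matched against the gold answer),
    so string equality is a meaningful comparison. Returns 1.0 when every
    language yields the same answer; lower values indicate
    query-language-induced drift.
    """
    pairs = list(combinations(answers_by_lang.values(), 2))
    if not pairs:
        return 1.0
    agreements = sum(normalize(a) == normalize(b) for a, b in pairs)
    return agreements / len(pairs)

# Hypothetical usage -- ask_llm, canonicalize, and the language set are
# placeholders for whatever inference and canonicalization pipeline is used:
# answers = {lang: canonicalize(ask_llm(question, lang))
#            for lang in ("en", "zh", "fr", "ar")}
# print(consistency_score(answers))
```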
📝 Abstract
As large language models (LLMs) are increasingly deployed across diverse linguistic and cultural contexts, understanding their behavior in both factual and disputable scenarios is essential, especially when their outputs may shape public opinion or reinforce dominant narratives. In this paper, we define two types of bias in LLMs, model bias (bias stemming from model training) and inference bias (bias induced by the language of the query), and study them through a two-phase evaluation. Phase 1 evaluates LLMs on factual questions with a single verifiable answer, assessing whether models remain consistent across query languages. Phase 2 expands the scope to geopolitically sensitive disputes, where responses may reflect culturally embedded or ideologically aligned perspectives. We construct a manually curated dataset spanning both factual and disputable QA across four languages and multiple question types. The results show that Phase 1 exhibits query-language-induced alignment, while Phase 2 reflects an interplay between the model's training context and the query language. This paper offers a structured framework for evaluating LLM behavior across neutral and sensitive topics, providing insights for future LLM deployment and for culturally aware evaluation practices in multilingual settings.
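Phase 2's interplay between training context and query language can be probed, in spirit, by crossing the two factors and tabulating annotated stances per cell. The sketch below is one possible framing, not the paper's procedure: the record format, provenance names, and stance vocabulary are all hypothetical.

```python
from collections import Counter, defaultdict

def stance_table(records):
    """Tabulate annotated stances by (model provenance, query language).

    `records` is an iterable of (provenance, lang, stance) tuples, e.g.
    ("model-A", "zh", "narrative-1"); stances here stand in for labels
    human annotators assign to each response on a disputable question.
    """
    table = defaultdict(Counter)
    for provenance, lang, stance in records:
        table[(provenance, lang)][stance] += 1
    return table

def majority_stances(table):
    """Collapse each (provenance, lang) cell to its most frequent stance.

    Variation across provenances at a fixed language would point to model
    bias; variation across languages for a fixed model would point to
    inference bias -- the two effects the evaluation aims to disentangle.
    """
    return {cell: counts.most_common(1)[0][0] for cell, counts in table.items()}
```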