🤖 AI Summary
This study identifies a systematic implicit geographic bias in large language models (LLMs): deduction performance on entities from the Global North/West significantly exceeds performance on entities from the Global South/East. Method: The authors introduce the "Twenty Questions" paradigm to construct Geo20Q+, a multilingual, multi-regional entity dataset that enables active querying and open-ended, multi-turn interaction, thereby overcoming the limitations of static prompt-based evaluation. They evaluate seven mainstream LLMs across seven languages and correlate performance with Wikipedia pageviews and pretraining-corpus frequencies. Contribution/Results: Geographic bias is pervasive and cannot be fully attributed to linguistic or data-distribution differences, revealing structural imbalances in LLM knowledge representation. Geo20Q+ establishes a novel evaluation paradigm and benchmark resource for assessing fairness and geographic equity in LLMs.
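The evaluation loop described above can be sketched as follows. This is a minimal toy harness, not the paper's actual implementation: the guesser's question policy and the judge are stubbed with rule-based functions over a tiny hand-made attribute inventory, whereas the study uses LLM calls for both roles. All names (`judge`, `play_game`, the attributes) are illustrative assumptions.

```python
def judge(entity_attrs, question_attr):
    """Stub judge: answer 'yes' iff the hidden entity has the queried attribute.
    In the actual paradigm this role is played by an LLM answering free-form
    yes/no questions about the hidden entity."""
    return "yes" if question_attr in entity_attrs else "no"

def play_game(entity_attrs, candidate_attrs, max_turns=20):
    """Run one deduction game; return (success, turns_used).

    Toy policy: query candidate attributes in a fixed order and guess once
    every attribute has been asked about. An LLM guesser would instead
    generate questions adaptively over multiple turns."""
    knowledge = {}
    for turn in range(1, max_turns + 1):
        attr = candidate_attrs[(turn - 1) % len(candidate_attrs)]
        knowledge[attr] = judge(entity_attrs, attr)
        if len(knowledge) == len(candidate_attrs):
            # Guess: the set of attributes the judge confirmed.
            guessed = {a for a, ans in knowledge.items() if ans == "yes"}
            return guessed == set(entity_attrs), turn
    return False, max_turns  # ran out of turns (the canonical 20-turn cap)

# Example: deduce a landmark entity from a four-attribute inventory.
attrs = ["is_landmark", "is_food", "in_asia", "is_animal"]
ok, turns = play_game({"is_landmark", "in_asia"}, attrs)
```

Success rates from many such games, aggregated per region of origin of the hidden entity, are what reveal the geographic performance gaps.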
📝 Abstract
Large Language Models (LLMs) have been extensively tuned to mitigate explicit biases, yet they often exhibit subtle implicit biases rooted in their pre-training data. Rather than directly probing LLMs with human-crafted questions that may trigger guardrails, we propose studying how models behave when they proactively ask questions themselves. The 20 Questions game, a multi-turn deduction task, serves as an ideal testbed for this purpose. We systematically evaluate geographic performance disparities in entity deduction using a new dataset, Geo20Q+, consisting of both notable people and culturally significant objects (e.g., foods, landmarks, animals) from diverse regions. We test popular LLMs across two gameplay configurations (canonical 20-question and unlimited turns) and in seven languages (English, Hindi, Mandarin, Japanese, French, Spanish, and Turkish). Our results reveal geographic disparities: LLMs are substantially more successful at deducing entities from the Global North than the Global South, and the Global West than the Global East. While Wikipedia pageviews and pre-training corpus frequency correlate mildly with performance, they fail to fully explain these disparities. Notably, the language in which the game is played has minimal impact on performance gaps. These findings demonstrate the value of creative, free-form evaluation frameworks for uncovering subtle biases in LLMs that remain hidden in standard prompting setups. By analyzing how models initiate and pursue reasoning goals over multiple turns, we find geographic and cultural disparities embedded in their reasoning processes. We release the dataset (Geo20Q+) and code at https://sites.google.com/view/llmbias20q/home.
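The correlation analysis mentioned in the abstract, relating per-entity deduction success to Wikipedia pageviews or pre-training corpus frequency, can be sketched with a hand-rolled Spearman rank correlation. The numbers below are invented for illustration only; the study uses real pageview and corpus-frequency data, and the helper names are assumptions.

```python
def rank(values):
    """Assign 1-based ranks (ties broken by input order; fine for a sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

pageviews = [120_000, 45_000, 3_000, 900]  # illustrative pageview counts
success   = [0.85, 0.70, 0.30, 0.25]       # illustrative per-entity win rates
rho = spearman(pageviews, success)
```

A high rho would suggest popularity explains the disparities; the paper's finding is that such correlations are only mild, so data exposure alone does not account for the Global North/South and West/East gaps.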