GeoBS: Information-Theoretic Quantification of Geographic Bias in AI Models

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing geographic-bias evaluation measures are either model-specific or spatially implicit, so the field lacks a model-agnostic, spatially explicit, and universally applicable framework for fair cross-model comparison and spatial attribution analysis. Method: The authors propose GeoBS, presented as the first model-agnostic, spatially explicit geographic-bias assessment framework, grounded in information theory and jointly characterizing three fundamental spatial properties: multi-scale structure, distance decay, and anisotropy. GeoBS introduces three novel geographic bias scores and requires no modification to the underlying AI models or task pipelines. Results: Experiments across three task types, eight datasets, and eight models, including general-purpose large language models and specialized geospatial AI systems, reveal pervasive geographic bias in every evaluated model. GeoBS enables rigorous quantitative cross-model comparison and pinpoints the dominant spatial factors driving bias (e.g., regional underrepresentation or directional distortion), validating its universality and diagnostic capability.

📝 Abstract
The widespread adoption of AI models, especially foundation models (FMs), has made a profound impact on numerous domains. However, it also raises significant ethical concerns, including bias issues. Although numerous efforts have been made to quantify and mitigate social bias in AI models, geographic bias (in short, geo-bias) receives much less attention, which presents unique challenges. While previous work has explored ways to quantify geo-bias, these measures are model-specific (e.g., mean absolute deviation of LLM ratings) or spatially implicit (e.g., average fairness scores of all spatial partitions). We lack a model-agnostic, universally applicable, and spatially explicit geo-bias evaluation framework that allows researchers to fairly compare the geo-bias of different AI models and to understand what spatial factors contribute to the geo-bias. In this paper, we establish an information-theoretic framework for geo-bias evaluation, called GeoBS (Geo-Bias Scores). We demonstrate the generalizability of the proposed framework by showing how to interpret and analyze existing geo-bias measures under this framework. Then, we propose three novel geo-bias scores that explicitly take intricate spatial factors (multi-scalability, distance decay, and anisotropy) into consideration. Finally, we conduct extensive experiments on 3 tasks, 8 datasets, and 8 models to demonstrate that both task-specific GeoAI models and general-purpose foundation models may suffer from various types of geo-bias. This framework will not only advance the technical understanding of geographic bias but will also establish a foundation for integrating spatial fairness into the design, deployment, and evaluation of AI systems.
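The abstract describes information-theoretic geo-bias scores but does not give a formula. A minimal sketch of the underlying idea, not the paper's actual GeoBS definition: bin test points into a spatial grid and measure, via KL divergence, how far the distribution of a model's error mass departs from the distribution of the test points themselves. The grid-based formulation, function name, and parameters here are all illustrative assumptions.

```python
import numpy as np

def grid_bias_score(lons, lats, errors, n_cells=4):
    """Illustrative information-theoretic geo-bias score (NOT the
    paper's exact GeoBS formulation). Returns KL(p || q), where p is
    the spatial distribution of model error mass and q is the spatial
    distribution of test points. 0 means error is spread in proportion
    to the data; larger values mean error concentrates regionally."""
    # Bin points into an n_cells x n_cells grid over the data extent.
    lon_bins = np.linspace(lons.min(), lons.max(), n_cells + 1)
    lat_bins = np.linspace(lats.min(), lats.max(), n_cells + 1)
    counts, _, _ = np.histogram2d(lons, lats, bins=[lon_bins, lat_bins])
    err_mass, _, _ = np.histogram2d(lons, lats, bins=[lon_bins, lat_bins],
                                    weights=errors)
    p = (err_mass / err_mass.sum()).ravel()  # where the error lives
    q = (counts / counts.sum()).ravel()      # where the data lives
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```

Because the score depends only on point locations and per-point errors, it is model-agnostic in the same spirit as the framework: any model that can be scored per location can be compared on the same scale.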
Problem

Research questions and friction points this paper is trying to address.

Quantifying geographic bias in AI models universally
Developing model-agnostic spatially explicit bias evaluation framework
Understanding spatial factors contributing to AI geographic bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Information-theoretic framework for geo-bias evaluation
Novel scores incorporating spatial factors explicitly
Model-agnostic approach applicable across diverse AI systems
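One of the spatial factors the bullets name, anisotropy, can be made concrete with a small sketch (again an illustrative assumption, not the paper's actual score): bin test points by compass direction from the data centroid and check, via KL divergence over angular sectors, whether error mass is directionally skewed relative to the points themselves.

```python
import numpy as np

def anisotropy_score(lons, lats, errors, n_sectors=8):
    """Illustrative anisotropy check (NOT the paper's formulation).
    Bins points by direction from the data centroid and returns the
    KL divergence between the angular distribution of error mass and
    the angular distribution of the points. 0 means directionally
    even error; larger values mean error skews toward some bearing."""
    # Angle of each point relative to the centroid, in [-pi, pi].
    ang = np.arctan2(lats - lats.mean(), lons - lons.mean())
    bins = np.linspace(-np.pi, np.pi, n_sectors + 1)
    counts, _ = np.histogram(ang, bins=bins)
    mass, _ = np.histogram(ang, bins=bins, weights=errors)
    p = mass / mass.sum()      # error mass per directional sector
    q = counts / counts.sum()  # point count per directional sector
    m = (p > 0) & (q > 0)
    return float(np.sum(p[m] * np.log(p[m] / q[m])))
```

A model whose errors grow toward, say, the eastern half of a study area would score high here even if a purely regional (grid-based) check looked balanced, which is why direction is treated as a separate factor.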