🤖 AI Summary
To address the limited interpretability of spatial machine learning models, this study establishes the first integrated framework bridging eXplainable AI (XAI) and spatial statistical modeling. Methodologically, it extends Shapley values to multi-scale attribution of spatial heterogeneity, offering a new way to quantify spatial non-stationarity. The approach combines Shapley-based explanation, Multiscale Geographically Weighted Regression (MGWR), spatial autocorrelation analysis, and the PySAL/SHAP/GeoPandas toolchain. Empirical validation on 2020 U.S. county-level election data shows that, compared with MGWR alone, the method more accurately identifies key driving variables and their spatially varying contributions, and it improves model diagnostics, bias detection, and policy interpretability. The work provides both a theoretical foundation and a practical technical pathway toward trustworthy spatial intelligence.
📝 Abstract
This chapter discusses the opportunities of eXplainable Artificial Intelligence (XAI) within the realm of spatial analysis. A key objective in spatial analysis is to model spatial relationships and infer spatial processes to generate knowledge from spatial data, a task that has largely relied on spatial statistical methods. More recently, machine learning has offered scalable and flexible approaches that complement traditional methods and has been increasingly applied in spatial data science. Despite its advantages, machine learning is often criticized for being a black box, which limits our understanding of model behavior and output. Recognizing this limitation, XAI has emerged as a pivotal field in AI that provides methods to explain the output of machine learning models and thereby enhance transparency and understanding. These methods are crucial for model diagnosis, bias detection, and ensuring the reliability of results obtained from machine learning models. This chapter introduces key concepts and methods in XAI, focusing on Shapley value-based approaches, arguably the most popular class of XAI methods, and their integration with spatial analysis. An empirical example of county-level voting behavior in the 2020 U.S. Presidential election demonstrates the use of Shapley values in spatial analysis, with a comparison to Multiscale Geographically Weighted Regression (MGWR). The chapter concludes with a discussion of the challenges and limitations of current XAI techniques and proposes new directions.
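To make the Shapley value-based attribution that the chapter centers on concrete, the following is a minimal, self-contained sketch of the exact Shapley computation, not the chapter's actual pipeline: `shapley_values` is a hypothetical helper that enumerates all feature coalitions (tractable only for a handful of features; libraries like SHAP approximate this), and absent features are fixed at a baseline value, mirroring the interventional convention.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attribution of the prediction f(x) relative to a baseline.

    Features outside a coalition S are replaced by their baseline value;
    each feature's value is the coalition-weighted average of its marginal
    contribution to the model output.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Standard Shapley weight: |S|! * (n - |S| - 1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy linear model standing in for a trained spatial model (illustrative only).
model = lambda v: 2 * v[0] + 3 * v[1] - v[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # for a linear model this recovers each coefficient's contribution
```

By the efficiency property, the attributions sum to `f(x) - f(baseline)`; in a spatial setting, per-observation values like these are what one would subsequently map or test for spatial autocorrelation.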