🤖 AI Summary
This study investigates whether large language models (LLMs) rely on structured internal spatial representations or merely on linguistic heuristics for spatial reasoning. Drawing on theories of human spatial cognition, we decompose spatial reasoning into three computational primitives (relational composition, representational transformation, and stateful updating) and design a suite of controlled tasks to probe LLMs across English, Chinese, and Arabic. Using linear probing, sparse autoencoders, and causal interventions, we systematically analyze the models' internal mechanisms. Our findings reveal, for the first time at the level of computational primitives, that while spatial information exerts causal effects in intermediate layers, its representation is transient, fragmented across task families, and only weakly integrated into final predictions. Moreover, identical behavioral outputs can arise from heterogeneous mechanistic pathways, and cross-lingual spatial reasoning performance degrades significantly.
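To make the three primitives concrete, below is a minimal sketch of what one controlled task item per family might look like. These prompts and answers are illustrative stand-ins written for this summary, not the study's actual stimuli or answer formats.

```python
# Hypothetical, simplified examples of the three task families.
# The real prompts, languages, and answer formats are not reproduced here.
TASK_FAMILIES = {
    "relational_composition": {
        "prompt": "The key is left of the cup. The cup is left of the plate. "
                  "Where is the key relative to the plate?",
        "answer": "left of the plate",
    },
    "representational_transformation": {
        "prompt": "You face north and a tree is on your right. "
                  "If you turn to face south, where is the tree?",
        "answer": "on your left",
    },
    "stateful_spatial_updating": {
        "prompt": "You start at the door, walk two steps forward, then turn "
                  "left and walk one step. Where are you relative to the door?",
        "answer": "two steps forward and one step to the left",
    },
}

for family, item in TASK_FAMILIES.items():
    print(f"{family}: {item['prompt']}  ->  {item['answer']}")
```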
📝 Abstract
As spatial intelligence becomes an increasingly important capability for foundation models, it remains unclear whether the performance of large language models (LLMs) on spatial reasoning benchmarks reflects structured internal spatial representations or reliance on linguistic heuristics. We address this question from a mechanistic perspective by examining how spatial information is internally represented and used. Drawing on computational theories of human spatial cognition, we decompose spatial reasoning into three primitives (relational composition, representational transformation, and stateful spatial updating) and design controlled task families for each. We evaluate multilingual LLMs in English, Chinese, and Arabic under single-pass inference, and analyze internal representations using linear probing, sparse-autoencoder feature analysis, and causal interventions. We find that task-relevant spatial information is encoded in intermediate layers and can causally influence behavior, but these representations are transient, fragmented across task families, and weakly integrated into final predictions. Cross-linguistic analysis further reveals mechanistic degeneracy, where similar behavioral performance arises from distinct internal pathways. Overall, our results suggest that current LLMs exhibit limited, context-dependent spatial representations rather than robust, general-purpose spatial reasoning, highlighting the need for mechanistic evaluation beyond benchmark accuracy.
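As a rough illustration of the linear-probing step mentioned above, the sketch below fits a logistic-regression probe on intermediate-layer hidden states to predict a spatial relation label. The model name (gpt2), probe layer, and toy dataset are placeholders chosen for the sketch; they are not the models, layers, or stimuli used in the paper.

```python
# Minimal linear-probe sketch on intermediate-layer activations.
# Model, layer index, and the toy dataset are illustrative placeholders.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

MODEL_NAME = "gpt2"   # placeholder model, not one evaluated in the paper
PROBE_LAYER = 6       # hypothetical intermediate layer

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

# Toy relational examples: sentence -> spatial relation label (0 = left-of, 1 = right-of).
examples = [
    ("The cup is left of the plate.", 0),
    ("The cup is right of the plate.", 1),
    ("The book is left of the lamp.", 0),
    ("The book is right of the lamp.", 1),
]

def last_token_state(text: str) -> np.ndarray:
    """Return the chosen layer's hidden state at the final token position."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[PROBE_LAYER]
    return hidden[0, -1].numpy()

X = np.stack([last_token_state(text) for text, _ in examples])
y = np.array([label for _, label in examples])

# Fit a linear probe and report held-out accuracy on this tiny toy split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```

In practice one would sweep this probe over all layers and far larger, controlled datasets, and pair it with sparse-autoencoder feature analysis and causal interventions to test whether the probed information is actually used by the model rather than merely decodable.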