🤖 AI Summary
This paper evaluates large language models' (LLMs) ability to reason about abstract geometric spatial relationships from programmatic code inputs, a task distinct from mathematical computation. Method: The authors introduce GeoGramBench, a benchmark for "Program-to-Geometry" reasoning comprising 500 carefully refined problems expressed as TikZ and Python symbolic drawing code and organized into three levels of geometric structural complexity (not mathematical difficulty). The Program-to-Geometry task is formally defined as translating programmatic drawing code into accurate, abstract spatial inference. Results: Experiments across 17 state-of-the-art LLMs show that none exceeds 50% accuracy at the highest abstraction level, and geometric spatial reasoning performance correlates only weakly with conventional mathematical reasoning ability. GeoGramBench is publicly released.
📝 Abstract
Geometric spatial reasoning forms the foundation of many applications in artificial intelligence, yet the ability of large language models (LLMs) to operate over geometric spatial information expressed in procedural code remains underexplored. In this paper, we address this gap by formalizing the Program-to-Geometry task, which challenges models to translate programmatic drawing code into accurate and abstract geometric reasoning. To evaluate this capability, we present GeoGramBench, a benchmark of 500 carefully refined problems organized by a tailored three-level taxonomy that considers geometric complexity rather than traditional mathematical reasoning complexity. Our comprehensive evaluation of 17 frontier LLMs reveals consistent and pronounced deficiencies: even the most advanced models achieve less than 50% accuracy at the highest abstraction level. These results highlight the unique challenges posed by program-driven spatial reasoning and establish GeoGramBench as a valuable resource for advancing research in symbolic-to-spatial geometric reasoning. Project page: https://github.com/LiAuto-DSR/GeoGramBench.
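To make the Program-to-Geometry task concrete, the following is a minimal, hypothetical sketch of the kind of problem the benchmark targets (an illustrative construction, not an actual GeoGramBench item): drawing code places two circles by coordinates, and the solver must infer a spatial relation, here whether the smaller circle lies entirely inside the larger one, that is never stated explicitly in the code.

```python
import math

# Hypothetical "drawing code": each circle is (center_x, center_y, radius).
# A model reading this program must recover the implied spatial relation.
big = (0.0, 0.0, 5.0)
small = (1.0, 1.0, 2.0)

def contains(outer, inner):
    """True if the `inner` circle lies entirely within the `outer` circle.

    Containment holds when the center distance plus the inner radius
    does not exceed the outer radius.
    """
    ox, oy, orad = outer
    ix, iy, irad = inner
    d = math.hypot(ix - ox, iy - oy)
    return d + irad <= orad

print(contains(big, small))  # → True
```

The point of the example is that the answer is implicit in coordinates and radii rather than stated symbolically, which is exactly the symbolic-to-spatial gap the benchmark probes.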