🤖 AI Summary
This study targets a fundamental limitation of large language models (LLMs): deep understanding of geometric relations, beyond mere answer accuracy. Because conventional end-result evaluation cannot expose this gap, we propose the first *disentangled geometric understanding evaluation paradigm* and introduce GeomRel, a benchmark dataset with fine-grained, structure-aware geometric relation annotations. We further design Geometry Chain-of-Thought (GeoCoT), a prompting framework that explicitly models spatial relational reasoning paths. Systematic multi-model evaluation on GeomRel reveals widespread geometric relation misidentification across mainstream LLMs, including GPT-4 and Claude. GeoCoT significantly improves relation-identification accuracy, with an average gain of 27.6%. This work provides a methodological foundation for rigorously assessing and enhancing geometric reasoning in LLMs.
📝 Abstract
Geometric ability poses a significant challenge for large language models (LLMs) because it demands advanced spatial comprehension and abstract thinking. Existing datasets evaluate LLMs primarily on final answers, which cannot reliably measure genuine understanding of geometric structures, since an LLM can reach a correct answer by coincidence. To fill this gap, we introduce the GeomRel dataset, which evaluates LLMs' understanding of geometric structures by isolating the core problem-solving step of geometric relationship identification. Using this benchmark, we conduct thorough evaluations of diverse LLMs and identify key limitations in their understanding of geometric structures. We further propose the Geometry Chain-of-Thought (GeoCoT) method, which enhances LLMs' ability to identify geometric relationships and yields significant performance improvements.
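The abstract does not reproduce GeoCoT's actual prompt. As an illustration only, a chain-of-thought prompt that foregrounds relationship identification before answer derivation might be assembled as follows; the function name `build_geocot_prompt` and the step wording are hypothetical, not taken from the paper:

```python
# Illustrative sketch only: a chain-of-thought style prompt that asks a model to
# identify geometric relationships explicitly before solving. The structure below
# is an assumption about what a GeoCoT-style prompt could look like, not the
# paper's actual prompt.

def build_geocot_prompt(problem: str) -> str:
    steps = [
        "1. List every geometric object in the problem "
        "(points, lines, segments, angles, circles).",
        "2. For each relevant pair of objects, state their relationship "
        "(e.g. parallel, perpendicular, congruent, tangent), "
        "citing the given conditions.",
        "3. Only after the relationships are fixed, derive the answer step by step.",
    ]
    return (
        "Solve the following geometry problem. "
        "Before computing anything, reason explicitly about the structure:\n"
        + "\n".join(steps)
        + f"\n\nProblem: {problem}\nReasoning:"
    )

prompt = build_geocot_prompt(
    "In triangle ABC, AB = AC and angle A = 40 degrees. Find angle B."
)
print(prompt)
```

The intent of such a prompt is to separate the structure-identification step, which GeomRel evaluates in isolation, from the final numeric computation.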