🤖 AI Summary
Existing LLM evaluation paradigms over-rely on opaque numerical metrics and fail to uncover fundamental deficiencies in spatial reasoning, particularly in understanding the physical world. Method: We propose LTD-Bench, the first benchmark to use visual diagram generation as its core evaluation modality. It assesses bidirectional language-to-spatial mapping by requiring models to produce either rasterized dot-grid images or executable drawing code. Our methodology features multi-level spatial task design, a dual-path evaluation framework (generation and recognition), and cross-model similarity diagnostics. Contribution/Results: Experiments reveal that state-of-the-art LLMs, despite strong performance on conventional benchmarks, exhibit pervasive failures in spatial mapping. LTD-Bench exposes these deficits intuitively and interpretably, significantly improving evaluation transparency, diagnostic utility, and practical relevance for spatial cognition assessment.
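To make the dot-grid path concrete, here is a minimal sketch of how such a generation task could be prompted and scored. The grid size, prompt wording, `parse_dot_grid` helper, and IoU scoring are illustrative assumptions, not the benchmark's published protocol:

```python
import numpy as np

# Illustrative sketch of LTD-Bench's dot-grid generation path. The prompt
# format, parser, and IoU scoring are assumptions, not the benchmark's
# published protocol.

GRID_SIZE = 16  # assumed grid resolution

PROMPT = (
    f"Draw a triangle on a {GRID_SIZE}x{GRID_SIZE} grid. "
    f"Output exactly {GRID_SIZE} lines of {GRID_SIZE} characters, "
    f"using '#' for a filled dot and '.' for an empty cell."
)

def parse_dot_grid(text: str, size: int = GRID_SIZE) -> np.ndarray:
    """Convert a model's character-grid answer into a binary array."""
    rows = [line for line in text.strip().splitlines() if line.strip()]
    grid = np.zeros((size, size), dtype=bool)
    for i, row in enumerate(rows[:size]):
        for j, ch in enumerate(row[:size]):
            grid[i, j] = (ch == "#")
    return grid

def iou_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection-over-union between predicted and reference drawings."""
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, ref).sum()) / float(union)
```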
📝 Abstract
Current evaluation paradigms for large language models (LLMs) represent a critical blind spot in AI research: they rely on opaque numerical metrics that conceal fundamental limitations in spatial reasoning while providing no intuitive understanding of model capabilities. This deficiency creates a dangerous disconnect between reported performance and practical abilities, particularly for applications requiring physical world understanding. We introduce LTD-Bench, a breakthrough benchmark that transforms LLM evaluation from abstract scores to directly observable visual outputs by requiring models to generate drawings through dot matrices or executable code. This approach makes spatial reasoning limitations immediately apparent even to non-experts, bridging the fundamental gap between statistical performance and intuitive assessment. LTD-Bench implements a comprehensive methodology with complementary generation tasks (testing spatial imagination) and recognition tasks (assessing spatial perception) across three progressively challenging difficulty levels, methodically evaluating both directions of the critical language-spatial mapping. Our extensive experiments with state-of-the-art models expose an alarming capability gap: even LLMs achieving impressive results on traditional benchmarks demonstrate profound deficiencies in establishing bidirectional mappings between language and spatial concepts, a fundamental limitation that undermines their potential as genuine world models. Furthermore, LTD-Bench's visual outputs enable powerful diagnostic analysis, offering a potential approach to investigating model similarity.
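As an illustration of the executable-code modality, the following sketch shows how a harness might run model-emitted drawing code and rasterize the result for inspection or comparison. The `render_drawing_code` function, the matplotlib setup, and the exec-based namespace are assumptions for illustration, not LTD-Bench's actual implementation:

```python
import matplotlib
matplotlib.use("Agg")  # headless rendering, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Illustrative sketch of the executable-code path: run model-emitted
# drawing code and rasterize the figure so the output can be inspected
# or compared. This harness is an assumption, not LTD-Bench's published
# implementation; real use would need proper sandboxing around exec().

def render_drawing_code(code: str) -> np.ndarray:
    """Execute drawing code against a fresh axes and return an RGB array."""
    fig, ax = plt.subplots(figsize=(2, 2))
    exec(code, {"ax": ax, "plt": plt, "np": np})  # restricted namespace only
    ax.set_axis_off()
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[:, :, :3].copy()
    plt.close(fig)
    return img

# A model asked to "draw a circle" might emit something like this:
model_output = "ax.add_patch(plt.Circle((0.5, 0.5), 0.3, fill=False))\nax.set_aspect('equal')"
image = render_drawing_code(model_output)  # (H, W, 3) uint8 array
```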