AI Summary
Existing evaluation benchmarks struggle to measure the precise compositional spatial reasoning capabilities of multimodal large language models because their tasks are overly simplistic, they rely on semantic similarity, and their evaluation criteria lack rigor. To address this gap, this work proposes a geometry-driven benchmark inspired by the Tangram puzzle, introducing the Tangram Construction Expression (TCE), a novel symbolic geometric representation framework that enables machine-verifiable, exact coordinate-based descriptions. The benchmark features two tasks: contour prediction and end-to-end inverse geometric assembly code generation. Experimental results reveal that prevailing multimodal large language models predominantly rely on contour matching while neglecting the underlying geometric constraints, often producing distorted component shapes, a clear indication of their significant deficiency in preserving geometric integrity.
Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in visual recognition and semantic understanding. Nevertheless, their ability to perform precise compositional spatial reasoning remains largely unexplored. Existing benchmarks often involve relatively simple tasks and rely on semantic approximations or coarse relative positioning, while their evaluation metrics are typically limited and lack rigorous mathematical formulations. To bridge this gap, we introduce TangramPuzzle, a geometry-grounded benchmark designed to evaluate compositional spatial reasoning through the lens of the classic Tangram game. We propose the Tangram Construction Expression (TCE), a symbolic geometric framework that grounds tangram assemblies in exact, machine-verifiable coordinate specifications, to mitigate the ambiguity of visual approximation. We design two complementary tasks: Outline Prediction, which demands inferring global shapes from local components, and End-to-End Code Generation, which requires solving inverse geometric assembly problems. We conduct extensive evaluation experiments on advanced open-source and proprietary models, revealing an interesting insight: MLLMs tend to prioritize matching the target silhouette while neglecting geometric constraints, leading to distortions or deformations of the pieces.
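The paper's central idea is that grounding tangram assemblies in exact coordinates makes correctness machine-checkable rather than a matter of visual approximation. The paper does not publish the TCE syntax here, so the sketch below is only a hypothetical illustration of that kind of check: each placed piece is a vertex list, and a piece passes only if its edge lengths exactly match the canonical piece (i.e., it was rigidly rotated/translated, not distorted). The piece names and `piece_is_rigid` helper are inventions for this example, not the paper's API.

```python
import math

# Hypothetical canonical pieces as (x, y) vertex lists.
# (Illustrative only; the paper's actual TCE representation may differ.)
CANONICAL = {
    "small_triangle": [(0, 0), (1, 0), (0, 1)],
    "square": [(0, 0), (1, 0), (1, 1), (0, 1)],
}

def side_lengths(poly):
    """Sorted edge lengths: invariant under rotation, translation, reflection."""
    n = len(poly)
    return sorted(math.dist(poly[i], poly[(i + 1) % n]) for i in range(n))

def piece_is_rigid(name, placed, tol=1e-9):
    """A placed piece must reproduce its canonical edge lengths: no stretching
    or deformation, which is exactly the failure mode the benchmark exposes."""
    return len(placed) == len(CANONICAL[name]) and all(
        abs(a - b) <= tol
        for a, b in zip(side_lengths(CANONICAL[name]), side_lengths(placed))
    )

# A correctly placed small triangle (rotated and translated):
ok = piece_is_rigid("small_triangle", [(2, 2), (2, 3), (1, 2)])
# A distorted placement (one leg stretched to fit a silhouette):
bad = piece_is_rigid("small_triangle", [(2, 2), (2, 3.5), (1, 2)])
print(ok, bad)  # True False
```

A checker in this spirit would let the benchmark reject silhouette-matching answers whose pieces are deformed, which is precisely the error pattern the authors report for current MLLMs.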