🤖 AI Summary
This paper addresses the limited capability of multimodal large language models (MLLMs) in complex spatial reasoning—particularly multi-step origami folding under strict geometric constraints—by introducing ORIGAMISPACE, a benchmark dedicated to origami tasks. Methodologically, it proposes a comprehensive evaluation framework integrating geometric constraint modeling and multi-step folding reasoning, featuring formal origami diagram encoding, crease pattern compilation, folding process tracking, and image generation, alongside an interactive environment that supports reinforcement learning training. Key contributions include: (1) releasing a high-quality dataset of 350 origami instances; (2) systematically exposing critical deficiencies of state-of-the-art MLLMs in pattern prediction, multi-step spatial reasoning, spatial relationship prediction, and end-to-end crease pattern (CP) code generation; and (3) establishing a reproducible, extensible benchmark and evaluation paradigm for multimodal spatial reasoning.
📝 Abstract
Spatial reasoning is a key capability in artificial intelligence, and it is especially crucial in areas such as robotics, computer vision, and natural language understanding. However, evaluating the ability of multimodal large language models (MLLMs) to perform complex spatial reasoning remains challenging, particularly in scenarios requiring multi-step reasoning and precise mathematical constraints. This paper introduces ORIGAMISPACE, a new dataset and benchmark designed to evaluate the multi-step spatial reasoning ability of MLLMs and their capacity to handle mathematical constraints through origami tasks. The dataset contains 350 instances, each comprising a strictly formatted crease pattern (CP diagram), the compiled flat pattern, the complete folding process, and the final folded shape image. We propose four evaluation tasks: Pattern Prediction, Multi-step Spatial Reasoning, Spatial Relationship Prediction, and End-to-End CP Code Generation. For the CP code generation task, we design an interactive environment and explore the possibility of using reinforcement learning methods to train MLLMs. Through experiments on existing MLLMs, we provide an initial account of the strengths and weaknesses of these models on complex spatial reasoning tasks.