🤖 AI Summary
Large language model (LLM) agents face fundamental limitations in embodied deployment due to weak spatial mental modeling, which hinders long-horizon spatial reasoning, dynamic state tracking, and active exploration under partial observability.
Method: We introduce CubeBench, the first generative benchmark centered on the Rubik's Cube, coupled with a three-tier progressive diagnostic framework that systematically isolates and quantifies three distinct spatial cognition bottlenecks. We further propose an external-solver-augmented causal attribution analysis to uncover the root causes of LLM failure in long-horizon planning. Our methodology integrates symbolic state representation, vision-action co-simulation, and tool-augmented interaction protocols.
Contribution/Results: Empirical evaluation across mainstream LLMs reveals a 0.00% pass rate on long-horizon Rubik's Cube tasks, precisely localizing critical breakdowns in spatial mental models. CubeBench establishes the first reproducible, verifiable cognitive assessment standard for embodied intelligence, enabling rigorous diagnosis of spatial reasoning deficits.
📝 Abstract
Large Language Model (LLM) agents, while proficient in the digital realm, face a significant gap in physical-world deployment due to the challenge of forming and maintaining a robust spatial mental model. We identify three core cognitive challenges hindering this transition: spatial reasoning, long-horizon state tracking via mental simulation, and active exploration under partial observation. To isolate and evaluate these faculties, we introduce CubeBench, a novel generative benchmark centered on the Rubik's Cube. CubeBench uses a three-tiered diagnostic framework that progressively assesses agent capabilities, from foundational state tracking with full symbolic information to active exploration with only partial visual data. Our experiments on leading LLMs reveal critical limitations, including a uniform 0.00% pass rate on all long-horizon tasks, exposing a fundamental failure in long-term planning. To isolate these cognitive bottlenecks, we further equip agents with external solver tools and attribute failures to specific faculties. By analyzing the failure modes, we provide key insights to guide the development of more physically grounded intelligent agents.