🤖 AI Summary
Large multimodal models (LMMs) reason predominantly through text, limiting their effectiveness on complex compositional and planning tasks. To address this, we propose Graph-of-Thought (GoT), the first framework enabling LMMs to autonomously generate and leverage conceptual diagrams (e.g., spatial-relation sketches) for multi-path reasoning in a zero-shot, fully automated manner, without manual diagramming or model fine-tuning. GoT unifies textual and diagrammatic reasoning modalities, incorporates deep backward tracing and beam search for inference optimization, and integrates PDDL-based planning semantics. Evaluated on Blocksworld, GoT boosts GPT-4o's task success rate from 35.5% to 90.2%. On the challenging Parking domain with depth-40 instances, it outperforms o1-preview by over 13%. These results demonstrate substantial improvements in LMMs' generalization capability and interpretability for complex planning tasks.
📝 Abstract
Human reasoning relies on constructing and manipulating mental models: simplified internal representations of situations that we use to understand and solve problems. Conceptual diagrams (for example, sketches drawn by humans to aid reasoning) externalize these mental models, abstracting away irrelevant details to efficiently capture relational and spatial information. In contrast, Large Language Models (LLMs) and Large Multimodal Models (LMMs) predominantly reason through textual representations, limiting their effectiveness in complex multi-step combinatorial and planning tasks. In this paper, we propose a zero-shot, fully automatic framework that enables LMMs to reason through multiple chains of self-generated intermediate conceptual diagrams, significantly enhancing their combinatorial planning capabilities. Our approach does not require any human initialization beyond a natural language description of the task. It integrates both textual and diagrammatic reasoning within an optimized graph-of-thought inference framework, enhanced by beam search and depth-wise backtracking. Evaluated on multiple challenging PDDL planning domains, our method substantially improves GPT-4o's performance (for example, from 35.5% to 90.2% in Blocksworld). On more difficult planning domains with solution depths up to 40, our approach outperforms even the o1-preview reasoning model (for example, over 13% improvement in Parking). These results highlight the value of conceptual diagrams as a complementary reasoning medium in LMMs.
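The abstract's "beam search and depth-wise backtracking" over a graph of thoughts can be sketched abstractly. The toy below is an illustration only, not the paper's implementation: the `expand`, `score`, and `is_goal` callbacks are hypothetical stand-ins for what the framework would delegate to the LMM (proposing successor diagrams, ranking them, checking goal satisfaction), and the demo problem (reach 10 from 0 with +1/+3 moves) replaces an actual PDDL domain.

```python
def beam_search(initial, expand, score, is_goal, beam_width=2, max_depth=20):
    """Beam search over reasoning states with depth-wise backtracking.

    At each depth, keep the `beam_width` best candidates and stash the
    runners-up; if a beam dead-ends (no successors), resume from the
    most recent runners-up instead of failing outright -- a toy
    stand-in for the paper's deep backward tracing.
    """
    beam = [initial]
    stack = []  # runner-up states recorded per depth, for backtracking
    for _ in range(max_depth):
        candidates = []
        for state in beam:
            for nxt in expand(state):
                if is_goal(nxt):
                    return nxt
                candidates.append(nxt)
        if not candidates:
            # Depth-wise backtracking: fall back to earlier runners-up.
            while stack and not stack[-1]:
                stack.pop()
            if not stack:
                return None
            beam = stack.pop()
            continue
        candidates.sort(key=score, reverse=True)
        stack.append(candidates[beam_width:])
        beam = candidates[:beam_width]
    return None


# Hypothetical demo problem: build a path from 0 to 10 using +1/+3 steps.
def expand(path):
    return [path + [path[-1] + step] for step in (1, 3) if path[-1] + step <= 10]

def score(path):
    return -abs(10 - path[-1])  # closer to the goal scores higher

def is_goal(path):
    return path[-1] == 10

plan = beam_search([0], expand, score, is_goal)
print(plan)  # → [0, 3, 6, 9, 10]
```

In the actual framework, each state would be a (text, diagram) pair and the scoring would come from the model itself; the search skeleton above only conveys the control flow.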