TopoBench: Benchmarking LLMs on Hard Topological Reasoning

📅 2026-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the poor performance of large language models on spatial reasoning tasks involving global topological properties—such as connectivity, loop closure, and region symmetry—in grid-based puzzles. To systematically evaluate and analyze this limitation, the authors introduce TopoBench, the first standardized benchmark dedicated to hard topological reasoning, encompassing six puzzle families across three difficulty levels, along with a fine-grained error taxonomy. Through chain-of-thought annotations, intervention simulations, and tool-augmented constraint verification, the work diagnoses the root causes of model failures. Experimental results reveal that state-of-the-art models solve fewer than 25% of hard instances, with the primary bottleneck lying in extracting valid constraints from spatial representations rather than in the subsequent deductive reasoning.

📝 Abstract
Solving topological grid puzzles requires reasoning over global spatial invariants such as connectivity, loop closure, and region symmetry, and remains challenging for even the most powerful large language models (LLMs). To study these abilities under controlled settings, we introduce TopoBench, a benchmark of six puzzle families across three difficulty levels. We evaluate strong reasoning LLMs on TopoBench and find that even frontier models solve fewer than one quarter of hard instances, with two families nearly unsolved. To investigate whether these failures stem from reasoning limitations or from difficulty extracting and maintaining spatial constraints, we annotate 750 chain-of-thought traces with an error taxonomy that surfaces four candidate causal failure modes, then test them with targeted interventions simulating each error type. These interventions show that certain error patterns, like premature commitment and constraint forgetting, have a direct impact on the ability to solve the puzzle, while repeated reasoning is a benign effect of search. Finally, we study mitigation strategies including prompt guidance, cell-aligned grid representations, and tool-based constraint checking, finding that the bottleneck lies in extracting constraints from spatial representations, not in reasoning over them. Code and data are available at github.com/mayug/topobench-benchmark.
Problem

Research questions and friction points this paper aims to address.

topological reasoning
large language models
spatial constraints
grid puzzles
reasoning limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

topological reasoning
benchmark
chain-of-thought analysis
constraint extraction
failure mode diagnosis