🤖 AI Summary
Current large language models lack systematic evaluation of their long-horizon planning capabilities, and their performance in multi-step reasoning scenarios remains unclear. This work proposes SokoBench, the first standardized benchmark specifically designed for long-horizon planning. It is based on a simplified version of the Sokoban puzzle and integrated with PDDL parsing and solving tools, so that state tracking is factored out and planning ability is measured in isolation. Experimental results show that state-of-the-art reasoning models exhibit significant performance degradation on planning tasks exceeding 25 steps. Moreover, even when augmented with PDDL-based tool assistance, performance gains remain limited, revealing fundamental architectural bottlenecks that constrain their planning competence.
📝 Abstract
Although the capabilities of large language models have been increasingly tested on complex reasoning tasks, their long-horizon planning abilities have not yet been extensively investigated. In this work, we provide a systematic assessment of the planning and long-horizon reasoning capabilities of state-of-the-art Large Reasoning Models (LRMs). We propose a novel benchmark based on Sokoban puzzles, intentionally simplified to isolate long-horizon planning from state persistence. Our findings reveal a consistent degradation in planning performance when more than 25 moves are required to reach the solution, suggesting a fundamental constraint on forward planning capacity. We show that equipping LRMs with Planning Domain Definition Language (PDDL) parsing, validation, and solving tools allows for modest improvements, suggesting inherent architectural limitations which might not be overcome by test-time scaling approaches alone.
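To make the PDDL-based setup concrete, the following is a minimal sketch of how a simplified Sokoban domain is typically encoded in PDDL. This is an illustration, not the paper's actual encoding: the predicate and action names are assumptions, and directional (collinearity) constraints on pushes are omitted for brevity.

```pddl
(define (domain sokoban-simplified)
  (:requirements :strips :typing)
  (:types location)
  (:predicates
    (at-player ?l - location)        ; player's current cell
    (at-box ?l - location)           ; a box occupies this cell
    (clear ?l - location)            ; cell is unoccupied
    (adjacent ?from ?to - location)) ; grid adjacency relation
  ;; Move the player into an empty neighbouring cell.
  (:action move
    :parameters (?from ?to - location)
    :precondition (and (at-player ?from) (clear ?to) (adjacent ?from ?to))
    :effect (and (at-player ?to) (clear ?from)
                 (not (at-player ?from)) (not (clear ?to))))
  ;; Push a box from an adjacent cell into the clear cell beyond it.
  (:action push
    :parameters (?p ?b ?t - location)
    :precondition (and (at-player ?p) (at-box ?b) (clear ?t)
                       (adjacent ?p ?b) (adjacent ?b ?t))
    :effect (and (at-player ?b) (at-box ?t) (clear ?p)
                 (not (at-player ?p)) (not (at-box ?b)) (not (clear ?t)))))
```

A problem file would then supply the grid's locations, the adjacency facts, the initial player and box positions, and a goal conjunction of `(at-box ?l)` facts, which an off-the-shelf planner or validator can consume directly.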