🤖 AI Summary
Existing planning benchmarks lack systematic organization, which hinders cross-domain performance comparison and evidence-based adaptation to novel scenarios. Method: This paper presents the first cross-domain categorization analysis of 28 mainstream planning benchmarks, spanning five application domains (embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation), and introduces PLANET, the first comprehensive benchmark suite for evaluating LLM planning capabilities. We propose an "algorithm–benchmark alignment" evaluation framework and a set of multidimensional planning capability metrics to identify critical capability gaps, alongside guidelines for adapting benchmarks to new scenarios. Contribution/Results: Our work enhances standardization and comparability in planning evaluation, providing systematic support for selecting and assessing LLM planning algorithms and for developing new benchmarks. PLANET enables rigorous, domain-aware evaluation and supports targeted advancement of planning capabilities in foundation models.
📝 Abstract
Planning is central to agents and agentic AI. The ability to plan, e.g., to create a travel itinerary within a budget, holds immense potential in both scientific and commercial contexts. Moreover, optimal plans tend to require fewer resources than ad hoc approaches. To date, a comprehensive understanding of existing planning benchmarks has been lacking. Without it, comparing planning algorithms' performance across domains or selecting suitable algorithms for new scenarios remains challenging. In this paper, we examine a range of planning benchmarks to identify commonly used testbeds for algorithm development and to highlight potential gaps. These benchmarks are categorized into five domains: embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation. Our study recommends the most appropriate benchmarks for various algorithms and offers insights to guide future benchmark development.