PLANET: A Collection of Benchmarks for Evaluating LLMs' Planning Capabilities

📅 2025-04-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing planning benchmarks lack systematic organization, hindering cross-domain performance comparison and evidence-based adaptation to novel scenarios. Method: This paper presents the first cross-domain categorization of 28 mainstream planning benchmarks, spanning five application domains (embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation), and introduces PLANET, a comprehensive benchmark suite for evaluating LLM planning capabilities. The authors propose an "algorithm–benchmark alignment" evaluation framework and a set of multidimensional planning-capability metrics to identify critical capability gaps, alongside guidelines for benchmark adaptation. Contribution/Results: The work enhances standardization and comparability in planning evaluation, providing systematic support for selecting and assessing LLM planning algorithms and for developing new benchmarks. PLANET enables rigorous, domain-aware evaluation and facilitates targeted advancement of planning capabilities in foundation models.

📝 Abstract
Planning is central to agents and agentic AI. The ability to plan, e.g., creating travel itineraries within a budget, holds immense potential in both scientific and commercial contexts. Moreover, optimal plans tend to require fewer resources compared to ad-hoc methods. To date, a comprehensive understanding of existing planning benchmarks appears to be lacking. Without it, comparing planning algorithms' performance across domains or selecting suitable algorithms for new scenarios remains challenging. In this paper, we examine a range of planning benchmarks to identify commonly used testbeds for algorithm development and highlight potential gaps. These benchmarks are categorized into embodied environments, web navigation, scheduling, games and puzzles, and everyday task automation. Our study recommends the most appropriate benchmarks for various algorithms and offers insights to guide future benchmark development.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' planning capabilities across diverse benchmarks
Identifying gaps in existing planning benchmarks for algorithms
Recommending suitable benchmarks for different planning algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates LLMs' planning capabilities across diverse benchmarks
Categorizes 28 benchmarks into five application domains
Recommends suitable benchmarks for different planning algorithms
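The categorization idea above can be sketched as a simple lookup structure. This is a minimal illustration, not the paper's actual taxonomy: the benchmark names and their domain assignments below are assumed examples chosen for familiarity.

```python
# Hypothetical sketch of the five-domain benchmark categorization.
# Benchmark-to-domain assignments are illustrative, not taken from PLANET.
from enum import Enum


class Domain(Enum):
    EMBODIED = "embodied environments"
    WEB = "web navigation"
    SCHEDULING = "scheduling"
    GAMES_PUZZLES = "games and puzzles"
    EVERYDAY = "everyday task automation"


# Illustrative catalog mapping benchmark names to domains (assumed examples).
CATALOG = {
    "ALFWorld": Domain.EMBODIED,
    "WebArena": Domain.WEB,
    "TravelPlanner": Domain.SCHEDULING,
    "Blocksworld": Domain.GAMES_PUZZLES,
    "TaskBench": Domain.EVERYDAY,
}


def benchmarks_for(domain: Domain) -> list[str]:
    """Return the catalog benchmarks that belong to the given domain."""
    return sorted(name for name, d in CATALOG.items() if d is domain)


print(benchmarks_for(Domain.WEB))  # prints ['WebArena']
```

A real recommendation tool in this spirit would also attach per-benchmark capability metrics (the paper's multidimensional metrics) so algorithms can be matched to benchmarks, not just to domains.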