🤖 AI Summary
Existing LLM evaluations predominantly target short-horizon, isolated reasoning tasks and therefore fail to reflect models' true capabilities in realistic planning scenarios that require long-term dependencies and structured decision-making. To address this gap, we introduce HeroBench, a benchmark for long-horizon planning in a high-fidelity virtual RPG environment, featuring multi-stage tasks that span resource acquisition, skill learning, equipment crafting, and combat strategy. The benchmark integrates realistic environment simulation, hierarchical task design, automated plan validation, and fine-grained behavioral analysis. We systematically evaluate 25 state-of-the-art LLMs, revealing critical deficiencies in both high-level strategic reasoning and low-level action coherence; performance degrades sharply as task horizon and dependency depth increase. This work establishes a scalable, reproducible evaluation paradigm and diagnostic toolkit for advancing autonomous agents' long-horizon planning capabilities.
📝 Abstract
Large language models (LLMs) have shown remarkable capabilities in isolated step-by-step reasoning tasks such as mathematics and programming, but their proficiency in long-horizon planning, where solutions require extended, structured sequences of interdependent actions, remains underexplored. Existing benchmarks typically assess LLMs through abstract or low-dimensional algorithmic tasks, failing to capture the complexity of realistic planning environments. We introduce HeroBench, a novel benchmark designed specifically to evaluate long-horizon planning and structured reasoning within complex RPG-inspired virtual worlds. HeroBench provides a rigorously constructed dataset of tasks covering a wide range of difficulties, a simulated environment to execute and validate agent plans, and detailed analytical tools for evaluating model performance. Tasks challenge models to formulate strategic plans, efficiently gather resources, master necessary skills, craft equipment, and defeat adversaries, reflecting the layered dependencies and constraints of practical scenarios. Our extensive evaluation of 25 state-of-the-art LLMs, spanning both open-source and proprietary models, including the GPT-5 family, reveals substantial performance disparities rarely observed in conventional reasoning benchmarks. Detailed error analysis further uncovers specific weaknesses in current models' abilities to generate robust high-level plans and reliably execute structured actions. HeroBench thus not only significantly advances the evaluation of LLM reasoning but also provides a flexible, scalable foundation for future research into advanced, autonomous planning in virtual environments.