🤖 AI Summary
This work addresses the challenge of evaluating the autonomous, long-horizon reasoning and exploration capabilities of large language models (LLMs), particularly in tool-free, open-ended settings. To this end, we introduce TextQuests, a benchmark grounded in the classic Infocom text-adventure games, in which LLM agents must complete multi-hour, hundreds-of-step tasks within a purely textual, closed-world environment. The benchmark rigorously assesses intrinsic long-context reasoning, state tracking, hierarchical action planning, and trial-and-error learning. Methodologically, the agents are driven by zero-shot prompting over a growing in-context history that serves as explicit contextual memory. Comprehensive evaluation across diverse state-of-the-art models reveals fundamental limitations under prolonged task execution, including poor memory coherence and policy drift. These findings establish TextQuests as a high-fidelity, high-difficulty evaluation standard for autonomous LLM agents, offering novel insights into the boundaries of current reasoning and planning capabilities.
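The agent loop the summary describes, zero-shot prompting with the full interaction history carried as contextual memory, can be sketched schematically. The environment and policy below are toy stand-ins invented for illustration (they are not part of TextQuests, and a real agent would replace `scripted_policy` with an LLM call prompted on the accumulated history):

```python
from dataclasses import dataclass

@dataclass
class ToyTextGame:
    """Minimal stand-in for an Infocom-style text environment (illustrative only)."""
    location: str = "hall"
    solved: bool = False

    def step(self, command: str) -> str:
        # A two-room puzzle: walk north, then open the chest.
        if command == "go north" and self.location == "hall":
            self.location = "vault"
            return "You enter the vault. A chest sits here."
        if command == "open chest" and self.location == "vault":
            self.solved = True
            return "The chest opens. You win!"
        return "Nothing happens."

def scripted_policy(history: list[str]) -> str:
    """Hypothetical stand-in for an LLM call. A real agent would send the
    entire history (the 'contextual memory') to the model and parse its
    next command from the response."""
    last = history[-1] if history else ""
    return "open chest" if "vault" in last else "go north"

def run_agent(env: ToyTextGame, policy, max_steps: int = 10) -> list[str]:
    history: list[str] = []  # growing textual context, never truncated here
    for _ in range(max_steps):
        action = policy(history)
        observation = env.step(action)
        history.append(f"> {action}\n{observation}")
        if env.solved:
            break
    return history
```

The point of the sketch is the single growing `history` list: across hundreds of steps this context becomes very long, which is exactly the intrinsic long-context reasoning burden the benchmark is designed to probe.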
📝 Abstract
Evaluating AI agents within complex, interactive environments that mirror real-world challenges is critical for understanding their practical capabilities. While existing agent benchmarks effectively assess skills like tool use or performance on structured tasks, they often do not fully capture an agent's ability to operate autonomously in exploratory environments that demand sustained, self-directed reasoning over a long and growing context. To spur the development of agents capable of more robust intrinsic reasoning over long horizons, we introduce TextQuests, a benchmark based on the Infocom suite of interactive fiction games. These text-based adventures, which can take human players over 30 hours and require hundreds of precise actions to solve, serve as an effective proxy for evaluating AI agents on focused, stateful tasks. The benchmark is specifically designed to assess an LLM agent's capacity for self-contained problem-solving by precluding the use of external tools, thereby focusing on intrinsic long-context reasoning capabilities in an exploratory environment characterized by the need for trial-and-error learning and sustained problem-solving within a single interactive session. We release TextQuests at https://textquests.ai.