🤖 AI Summary
Existing benchmarks often rely on loosely coupled constraints and idealized data, failing to capture the complexity of real-world autonomous planning tasks that require extracting information from dynamic web pages and coordinating tightly coupled constraints. To address this gap, this work proposes WorldTravel, the first planning benchmark that integrates authentic travel scenarios, multimodal web rendering, and an average of over 15 tightly coupled temporal-logical constraints per scenario. We introduce the WorldTravel-Webscape environment, which challenges agents to perceive and reason over more than 2,000 visually rendered web pages to accomplish tasks. Evaluation reveals that even state-of-the-art models such as GPT-5.2 achieve a mere 19.33% success rate in this multimodal setting, with planning performance degrading sharply once constraint counts exceed ten, highlighting critical bottlenecks in perception-action coordination and long-horizon planning.
📝 Abstract
Real-world autonomous planning requires coordinating tightly coupled constraints where a single decision dictates the feasibility of all subsequent actions. However, existing benchmarks predominantly feature loosely coupled constraints solvable through local greedy decisions and rely on idealized data, failing to capture the complexity of extracting parameters from dynamic web environments. We introduce \textbf{WorldTravel}, a benchmark comprising 150 real-world travel scenarios across 5 cities that demand navigating an average of 15+ interdependent temporal and logical constraints. To evaluate agents in realistic deployments, we develop \textbf{WorldTravel-Webscape}, a multimodal environment featuring over 2,000 rendered webpages where agents must perceive constraint parameters directly from visual layouts to inform their planning. Our evaluation of 10 frontier models reveals a significant performance collapse: even the state-of-the-art GPT-5.2 achieves only 32.67\% feasibility in text-only settings, which plummets to 19.33\% in multimodal environments. We identify a critical Perception-Action Gap and a Planning Horizon threshold at approximately 10 constraints, beyond which model reasoning consistently fails, suggesting that perception and reasoning remain independent bottlenecks. These findings underscore the need for next-generation agents that unify high-fidelity visual perception with long-horizon reasoning to handle brittle real-world logistics.
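As a toy illustration of the tight coupling the abstract describes (all names, prices, and times below are hypothetical, not drawn from the benchmark): a locally optimal greedy choice can render every downstream step infeasible, whereas joint reasoning over the coupled constraints recovers a valid plan.

```python
# Hypothetical mini-scenario: pick a flight, then attend a pre-booked tour.
# The flight choice and the tour's start time are tightly coupled: a flight
# that is cheapest in isolation may arrive too late for the tour.

flights = [
    {"name": "F1", "price": 120, "arrives": 18},  # cheapest, lands 18:00
    {"name": "F2", "price": 200, "arrives": 9},   # pricier, lands 09:00
]
tour_start = 14  # the booked tour starts at 14:00 the same day

def feasible(flight):
    # The downstream tour is only reachable if the flight lands first.
    return flight["arrives"] <= tour_start

# Greedy: optimize price locally, ignoring the temporal coupling.
greedy = min(flights, key=lambda f: f["price"])

# Joint: optimize price only over flights that keep the full plan feasible.
joint = min((f for f in flights if feasible(f)), key=lambda f: f["price"])

print(greedy["name"], feasible(greedy))  # F1 False -- greedy plan collapses
print(joint["name"], feasible(joint))    # F2 True  -- coupled plan succeeds
```

The benchmark's scenarios chain 15+ such constraints, so one early misstep (like picking F1 here) propagates infeasibility through the entire itinerary rather than costing a local penalty.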