🤖 AI Summary
Current evaluations of large language models are often confined to a single cognitive dimension and lack comprehensive benchmarks that integrate diverse human-level cognitive tasks such as spatial and verbal reasoning. This work proposes ItinBench, a novel evaluation framework that, for the first time, unifies route optimization and verbal reasoning within the realistic context of travel itinerary planning, thereby enabling cross-dimensional cognitive assessment. The study systematically evaluates prominent models, including Llama 3.1 8B, Mistral Large, Gemini 1.5 Pro, and the GPT family, and shows that they struggle to maintain both high performance and consistency when coordinating multiple tasks. These findings offer new perspectives and methodological support for building reasoning benchmarks that better reflect real-world challenges.
📝 Abstract
Large language models (LLMs) with advanced cognitive capabilities are emerging as agents for various reasoning and planning tasks. Traditional evaluations often focus on specific reasoning or planning questions within controlled environments. Recent studies have explored travel planning as a medium for integrating various verbal reasoning tasks into real-world contexts. However, reasoning tasks extend beyond verbal reasoning alone, and a comprehensive evaluation of LLMs requires a testbed that incorporates tasks from multiple cognitive domains. To address this gap, we introduce ItinBench, a benchmark that integrates a spatial reasoning task, route optimization, into trip itinerary planning while retaining the traditional verbal reasoning tasks. Using ItinBench, we evaluate various LLMs, including Llama 3.1 8B, Mistral Large, Gemini 1.5 Pro, and the GPT family, across these diverse tasks simultaneously. Our findings reveal that LLMs struggle to maintain high and consistent performance when concurrently handling multiple cognitive dimensions. By incorporating tasks from distinct human-level cognitive domains, ItinBench provides new insights into building more comprehensive reasoning testbeds that better reflect real-world challenges. Code and dataset are available at https://ethanwtl.github.io/IBweb/