🤖 AI Summary
This work systematically evaluates the dynamic adaptability of large language models (LLMs) to real-world travel disruptions, such as flight cancellations, adverse weather, or venue overbooking. To this end, we introduce the first adaptive itinerary planning benchmark specifically designed for travel interruption scenarios, spanning two dimensions: disruption severity and traveler tolerance. We propose three automated evaluation metrics—Intent Preservation, Responsiveness, and Adaptability—and combine LLM-as-a-judge scoring, expert human evaluation, and semantic/spatial/sequential consistency analysis into a multi-dimensional, triply validated assessment. Experimental results reveal that while LLMs maintain strong geographical coherence in long itineraries, their disruption-handling robustness degrades significantly as itinerary length increases, exposing a critical fragility under perturbation. This study establishes the first resilience evaluation framework for travel itinerary planning under disruption, offering a novel paradigm and a reproducible benchmark for reliability research in practical LLM deployment.
📝 Abstract
Recent efforts like TripCraft and TravelPlanner have advanced the use of Large Language Models (LLMs) for personalized, constraint-aware travel itinerary generation. Yet real travel often faces disruptions. To address this, we present TripTide, the first benchmark evaluating LLMs' ability to revise itineraries under realistic disruptions. TripTide models key dimensions such as disruption severity and traveler tolerance, enabling nuanced assessment of LLM adaptability to events like flight cancellations, weather closures, or overbooked attractions. We conduct a threefold evaluation. First, we introduce automatic metrics including Preservation of Intent (how well the revised plan maintains feasibility and goals), Responsiveness (promptness and appropriateness of disruption handling), and Adaptability (semantic, spatial, and sequential divergence between original and revised plans). Second, we apply an LLM-as-a-judge approach to automatically assess revision quality. Third, we perform manual expert evaluation to verify whether revisions preserve semantic, spatial, sequential, and responsiveness aspects. Our experiments show that LLMs maintain strong sequential consistency and semantic stability, while spatial deviations are larger for shorter trips but decrease with longer ones, indicating that extended plans encourage better geographic coherence. However, disruption-handling ability declines as plan length increases, highlighting limits in LLM robustness. TripTide establishes a benchmark for evaluating adaptability, personalization, and resilience in LLM-based travel planning under real-world uncertainty.
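The abstract does not give the exact formula behind the spatial component of the Adaptability metric, but one plausible reading is the average geodesic displacement between aligned stops of the original and revised itineraries. The sketch below illustrates that idea; the function names, the haversine choice, and the positional alignment of stops are assumptions for illustration, not the paper's actual definition.

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # mean Earth radius ~6371 km


def spatial_divergence(original: list[tuple[float, float]],
                       revised: list[tuple[float, float]]) -> float:
    """Mean per-stop displacement (km) between two itineraries.

    Stops are paired positionally; any extra stops in the longer
    plan are ignored. A value of 0.0 means the revised plan keeps
    every retained stop at the same location.
    """
    dists = [haversine_km(o, r) for o, r in zip(original, revised)]
    return sum(dists) / len(dists) if dists else 0.0


# Hypothetical example: a two-stop Paris day plan where the second
# stop is swapped for a nearby alternative after a closure.
original_plan = [(48.8584, 2.2945), (48.8606, 2.3376)]  # Eiffel Tower, Louvre
revised_plan = [(48.8584, 2.2945), (48.8600, 2.3266)]   # Eiffel Tower, Musée d'Orsay
print(f"{spatial_divergence(original_plan, revised_plan):.2f} km")
```

An unchanged plan scores 0.0, and larger values indicate the revision moved the traveler farther from the originally planned locations; the paper's observation that spatial deviation shrinks for longer trips would correspond to this per-stop average decreasing with itinerary length.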