TripTide: A Benchmark for Adaptive Travel Planning under Disruptions

📅 2025-10-24
🤖 AI Summary
This work systematically evaluates the dynamic adaptability of large language models (LLMs) to real-world travel disruptions—such as flight cancellations, adverse weather, or venue overbooking. To this end, we introduce the first adaptive itinerary planning benchmark specifically designed for travel interruption scenarios, incorporating dual dimensions: disruption severity and traveler tolerance. We propose three automated evaluation metrics—Intent Preservation, Responsiveness, and Adaptability—and integrate LLM-as-a-judge scoring, expert human evaluation, and semantic/spatial/sequential consistency analysis for multi-dimensional, triple-validated assessment. Experimental results reveal that while LLMs maintain strong geographical coherence in long itineraries, their robustness degrades significantly with increasing itinerary length, exposing a critical fragility under perturbation. This study establishes the first resilience evaluation framework for travel itinerary planning under disruption, offering a novel paradigm and a reproducible benchmark for reliability research in practical LLM deployment.

📝 Abstract
Recent efforts like TripCraft and TravelPlanner have advanced the use of Large Language Models (LLMs) for personalized, constraint-aware travel itinerary generation. Yet real travel often faces disruptions. To address this, we present TripTide, the first benchmark evaluating LLMs' ability to revise itineraries under realistic disruptions. TripTide models key dimensions such as disruption severity and traveler tolerance, enabling nuanced assessment of LLM adaptability to events like flight cancellations, weather closures, or overbooked attractions. We conduct a threefold evaluation. First, we introduce automatic metrics including Preservation of Intent (how well the revised plan maintains feasibility and goals), Responsiveness (promptness and appropriateness of disruption handling), and Adaptability (semantic, spatial, and sequential divergence between original and revised plans). Second, we apply an LLM-as-a-judge approach to automatically assess revision quality. Third, we perform manual expert evaluation to verify whether revisions preserve semantic, spatial, sequential, and responsive aspects. Our experiments show that LLMs maintain strong sequential consistency and semantic stability, while spatial deviations are larger for shorter trips but decrease with longer ones, indicating that extended plans encourage better geographic coherence. However, disruption-handling ability declines as plan length increases, highlighting limits in LLM robustness. TripTide establishes a benchmark for evaluating adaptability, personalization, and resilience in LLM-based travel planning under real-world uncertainty.
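The Adaptability metric above compares original and revised plans along semantic, spatial, and sequential axes. As an illustrative sketch only (not the paper's actual implementation), the spatial component could be approximated as the mean great-circle distance between position-aligned stops of the two itineraries; the function names, the alignment assumption, and the sample coordinates below are all hypothetical.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    # Great-circle distance in km between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def spatial_divergence(original, revised):
    # Mean distance between aligned stops; hypothetical simplification
    # that assumes equal-length, position-aligned stop lists.
    pairs = list(zip(original, revised))
    return sum(haversine_km(o, r) for o, r in pairs) / len(pairs)

# Toy example: a disruption replaces the second of two stops.
orig = [(22.5448, 88.3426), (22.5626, 88.3630)]
rev  = [(22.5448, 88.3426), (22.5851, 88.3468)]
print(f"{spatial_divergence(orig, rev):.2f} km")
```

Under this sketch, a revision that keeps stops geographically close to the originals scores a low divergence; an unchanged plan scores zero.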
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM adaptability to travel disruptions like cancellations
Assessing itinerary revision quality through automated and expert methods
Benchmarking LLM robustness in maintaining travel goals under uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark evaluates itinerary revisions under disruptions
Uses automatic metrics and LLM-as-a-judge assessment
Combines expert evaluation for multi-dimensional validation
Priyanshu Karmakar
School of Electrical and Computer Sciences, IIT Bhubaneswar, Bhubaneswar, India

Soumyabrata Chaudhuri
School of Electrical and Computer Sciences, IIT Bhubaneswar, Bhubaneswar, India

Shubhojit Mallick
Microsoft AI

Manish Gupta
Microsoft, India

Abhik Jana
Assistant Professor, Department of CSE, IIT Bhubaneswar

Shreya Ghosh
School of Electrical and Computer Sciences, IIT Bhubaneswar, Bhubaneswar, India