DeepTravel: An End-to-End Agentic Reinforcement Learning Framework for Autonomous Travel Planning Agents

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing travel planning agents rely on handcrafted prompts and rigid pipelines, exhibiting limited autonomy and failing to close the planning–execution–reflection loop in multi-step reasoning. This paper proposes an end-to-end agentic reinforcement learning framework featuring (1) hierarchical reward modeling, supporting both trajectory-level consistency and turn-level correctness validation, and (2) replay-augmented RL with failure experience replay. We construct a sandbox environment with toolized APIs for transportation, accommodation, and points of interest (POIs), enabling stable, efficient multi-step tool invocation and reflective optimization. Deployed in real-world DiDi scenarios, our framework significantly enhances the travel planning capabilities of compact models (e.g., Qwen3-32B), outperforming state-of-the-art large language models, including OpenAI o1/o3 and DeepSeek R1, on key operational metrics.

📝 Abstract
The travel planning (TP) agent has recently emerged as a building block that interacts with external tools and resources to generate travel itineraries, ensuring an enjoyable user experience. Despite its benefits, existing studies rely on handcrafted prompts and fixed agent workflows, hindering more flexible and autonomous TP agents. This paper proposes DeepTravel, an end-to-end agentic reinforcement learning framework for building an autonomous travel planning agent capable of autonomously planning, executing tools, and reflecting on tool responses to explore, verify, and refine intermediate actions in multi-step reasoning. To achieve this, we first construct a robust sandbox environment by caching transportation, accommodation, and POI data, facilitating TP agent training without being constrained by real-world API limitations (e.g., inconsistent outputs). Moreover, we develop a hierarchical reward modeling system, where a trajectory-level verifier first checks spatiotemporal feasibility and filters out unsatisfactory travel itineraries, and a turn-level verifier then validates the consistency of itinerary details against tool responses, enabling efficient and precise reward service. Finally, we propose a replay-augmented reinforcement learning method that lets the TP agent periodically replay from a failure experience buffer, yielding notable emergent agentic capacity. We deploy the trained TP agent on the DiDi Enterprise Solutions App and conduct comprehensive online and offline evaluations, demonstrating that DeepTravel enables small-sized LLMs (e.g., Qwen3-32B) to significantly outperform frontier LLMs such as OpenAI o1, o3, and DeepSeek R1 on travel planning tasks.
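The hierarchical reward modeling described above (a coarse trajectory-level feasibility check that gates a finer turn-level consistency check against cached tool responses) can be sketched roughly as follows. All names, the hour-of-day time encoding, and the 0 / 0.5 / 1 reward values are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Stop:
    name: str
    arrive: float   # hour of day, e.g. 14.5 means 14:30
    depart: float

@dataclass
class Itinerary:
    stops: list
    # facts cached from sandbox tool responses, e.g. {"Hotel A": "open"}
    tool_facts: dict = field(default_factory=dict)

def trajectory_level_ok(it: Itinerary) -> bool:
    """Coarse spatiotemporal feasibility: stops must be time-ordered and non-overlapping."""
    for prev, nxt in zip(it.stops, it.stops[1:]):
        if prev.depart > nxt.arrive:      # overlap -> infeasible itinerary
            return False
    return all(s.arrive <= s.depart for s in it.stops)

def turn_level_ok(it: Itinerary) -> bool:
    """Finer check: every claimed stop must be consistent with a cached tool response."""
    return all(it.tool_facts.get(s.name) == "open" for s in it.stops)

def reward(it: Itinerary) -> float:
    """Hierarchical reward: the cheap trajectory check gates the detailed turn check."""
    if not trajectory_level_ok(it):
        return 0.0
    return 1.0 if turn_level_ok(it) else 0.5
```

The design point is cost ordering: infeasible itineraries are rejected by the cheap geometric check before the per-turn consistency verification runs, which the abstract credits for an "efficient and precise reward service".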
Problem

Research questions and friction points this paper is trying to address.

Developing autonomous travel planning agents with flexible workflows
Creating robust training environments without real API limitations
Enhancing small LLMs to outperform frontier models in planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end reinforcement learning framework for autonomous travel planning
Hierarchical reward modeling system for efficient itinerary validation
Replay-augmented reinforcement learning with failure experience replay
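The failure-replay idea in the last bullet (periodically resampling from a buffer of failed rollouts during RL training) can be sketched as below. The class name, capacity, replay ratio, and the zero-reward failure criterion are hypothetical choices for illustration:

```python
import random
from collections import deque

class FailureReplayBuffer:
    """Fixed-size buffer of failed rollouts, periodically mixed back into RL batches."""

    def __init__(self, capacity: int = 1000, replay_ratio: float = 0.25, seed: int = 0):
        self.buf = deque(maxlen=capacity)   # oldest failures are evicted first
        self.replay_ratio = replay_ratio
        self.rng = random.Random(seed)

    def add(self, trajectory, reward: float) -> None:
        """Store only failed attempts (here: zero reward)."""
        if reward == 0.0:
            self.buf.append(trajectory)

    def mix_batch(self, fresh_batch: list) -> list:
        """Replace a fraction of fresh rollouts with stored failures for retry."""
        k = min(int(len(fresh_batch) * self.replay_ratio), len(self.buf))
        replayed = self.rng.sample(list(self.buf), k)
        return fresh_batch[:len(fresh_batch) - k] + replayed
```

The intuition is that hard tasks the agent previously failed are revisited once the policy has improved, instead of being discarded after one unlucky rollout.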
Yansong Ning
The Hong Kong University of Science and Technology (Guangzhou)
Rui Liu
Didichuxing Co. Ltd
Jun Wang
Didichuxing Co. Ltd
Kai Chen
Didichuxing Co. Ltd
Wei Li
Didichuxing Co. Ltd
Jun Fang
Didichuxing Co. Ltd
Kan Zheng
IEEE Fellow, Ningbo University
IoV, 5G/6G
Naiqiang Tan
Didichuxing Co. Ltd
Hao Liu
The Hong Kong University of Science and Technology (Guangzhou)