LiloDriver: A Lifelong Learning Framework for Closed-loop Motion Planning in Long-tail Autonomous Driving Scenarios

📅 2025-05-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the weak generalization and poor knowledge transfer of motion planners in long-tail autonomous driving scenarios, this paper proposes the first lifelong-learning-enabled closed-loop motion planning framework. Methodologically, it introduces a memory-augmented, large language model (LLM)-assisted planning architecture that enables zero-shot continual adaptation without retraining; establishes an end-to-end closed-loop pipeline comprising perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning; and integrates a structured scene memory bank, an interpretable motion planner, and the nuPlan simulation-based evaluation system. Experiments on the nuPlan benchmark show a 12.7% improvement in overall success rate across both common and long-tail scenarios, significantly outperforming static rule-based and state-of-the-art learning-based planners. The framework demonstrates human-like interpretability, strong generalization to rare scenarios, and engineering scalability.

πŸ“ Abstract
Recent advances in autonomous driving research have pushed motion planners toward robustness, safety, and adaptivity. However, existing rule-based and data-driven planners lack adaptability to long-tail scenarios, while knowledge-driven methods offer strong reasoning but face challenges in representation, control, and real-world evaluation. To address these challenges, we present LiloDriver, a lifelong learning framework for closed-loop motion planning in long-tail autonomous driving scenarios. By integrating large language models (LLMs) with a memory-augmented planner generation system, LiloDriver continuously adapts to new scenarios without retraining. It features a four-stage architecture comprising perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning. Evaluated on the nuPlan benchmark, LiloDriver achieves superior performance in both common and rare driving scenarios, outperforming static rule-based and learning-based planners. Our results highlight the effectiveness of combining structured memory and LLM reasoning to enable scalable, human-like motion planning in real-world autonomous driving. Our code is available at https://github.com/Hyan-Yao/LiloDriver.
Problem

Research questions and friction points this paper is trying to address.

Adapting to long-tail autonomous driving scenarios
Integrating LLMs for continuous learning without retraining
Improving motion planning performance in rare driving situations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LLMs with memory-augmented planner generation
Four-stage architecture for continuous adaptation
Combines structured memory and LLM reasoning
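The four-stage loop described above (perception, scene encoding, memory-based strategy refinement, LLM-guided reasoning) can be illustrated with a minimal sketch. All names and data structures here are hypothetical stand-ins, not the paper's actual implementation: perception and encoding are stubbed, the memory bank uses a toy dot-product lookup, and the LLM call is mocked.

```python
# Hypothetical sketch of a LiloDriver-style closed loop. Names and logic are
# illustrative assumptions, not the released codebase.
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    """Structured scene memory: (scene embedding, strategy) pairs."""
    entries: list = field(default_factory=list)

    def retrieve(self, embedding):
        # Toy nearest-neighbor lookup by dot-product similarity.
        if not self.entries:
            return None
        best = max(self.entries,
                   key=lambda e: sum(a * b for a, b in zip(e[0], embedding)))
        return best[1]

    def refine(self, embedding, strategy):
        # Lifelong-learning step: store the new scene/strategy pair so
        # long-tail scenarios are covered without retraining any model.
        self.entries.append((embedding, strategy))

def perceive(raw_sensor_data):
    # Stage 1: perception (stubbed as a pass-through).
    return raw_sensor_data

def encode_scene(observation):
    # Stage 2: scene encoding into a fixed-size embedding (toy stub).
    s = str(observation)
    return [float(len(s)), float(hash(s) % 7)]

def llm_reason(scene, prior_strategy):
    # Stage 4: LLM-guided reasoning (mocked). A real system would prompt an
    # LLM with the scene description plus the retrieved strategy.
    return {"plan": "keep_lane", "based_on": prior_strategy}

def plan_step(raw_sensor_data, memory: MemoryBank):
    obs = perceive(raw_sensor_data)
    emb = encode_scene(obs)
    prior = memory.retrieve(emb)            # Stage 3: memory refinement
    plan = llm_reason(obs, prior)
    memory.refine(emb, plan["plan"])        # continual adaptation
    return plan
```

On a first encounter the memory lookup returns nothing and the planner reasons from scratch; on a repeat of a similar scene, the previously stored strategy is retrieved and fed into the reasoning stage, which is the mechanism the paper credits for adaptation without retraining.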
Authors
Huaiyuan Yao
Xi'an Jiaotong University, China
Pengfei Li
Institute for AI Industry Research (AIR), Tsinghua University, China
Bu Jin
HKUST
Yupeng Zheng
Institute of Automation, Chinese Academy of Sciences
An Liu
Department of Computer Science and Technology, Tsinghua University, China
Lisen Mu
Horizon Robotics, China
Qing Su
University of Connecticut
Qian Zhang
Horizon Robotics, China
Yilun Chen
Institute for AI Industry Research (AIR), Tsinghua University, China
Peng Li
Institute for AI Industry Research (AIR), Tsinghua University, China