AI Summary
To address the weak generalization and poor knowledge transfer of motion planners in long-tail autonomous driving scenarios, this paper proposes the first lifelong-learning-enabled closed-loop motion planning framework. Methodologically, it introduces a memory-augmented, large language model (LLM)-assisted planning architecture that enables zero-shot continual adaptation without retraining; establishes an end-to-end closed-loop pipeline comprising perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning; and integrates a structured scene memory bank, an interpretable motion planner, and the nuPlan simulation-based evaluation system. Experiments on the nuPlan benchmark demonstrate a 12.7% improvement in overall success rate across both common and long-tail scenarios, significantly outperforming static rule-based and state-of-the-art learning-based planners. The framework exhibits human-like interpretability, strong generalization to rare scenarios, and scalability in engineering practice.
Abstract
Recent advances in autonomous driving research have pushed towards motion planners that are robust, safe, and adaptive. However, existing rule-based and data-driven planners lack adaptability to long-tail scenarios, while knowledge-driven methods offer strong reasoning but face challenges in representation, control, and real-world evaluation. To address these challenges, we present LiloDriver, a lifelong learning framework for closed-loop motion planning in long-tail autonomous driving scenarios. By integrating large language models (LLMs) with a memory-augmented planner generation system, LiloDriver continuously adapts to new scenarios without retraining. It features a four-stage architecture comprising perception, scene encoding, memory-based strategy refinement, and LLM-guided reasoning. Evaluated on the nuPlan benchmark, LiloDriver achieves superior performance in both common and rare driving scenarios, outperforming static rule-based and learning-based planners. Our results highlight the effectiveness of combining structured memory and LLM reasoning to enable scalable, human-like motion planning in real-world autonomous driving. Our code is available at https://github.com/Hyan-Yao/LiloDriver.
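To make the four-stage loop concrete, the sketch below illustrates how a memory-augmented planning cycle of this kind could be wired together: perceive, encode the scene, retrieve similar past scenes from a memory bank, and let an LLM-style reasoning step pick a planner strategy, with the result written back to memory so adaptation happens without retraining. This is a minimal illustration based only on the abstract; all class and function names (SceneMemoryBank, plan_one_step, etc.) are hypothetical and are not the repository's actual API.

```python
# Hypothetical sketch of a memory-augmented, four-stage planning loop
# (perception -> scene encoding -> memory-based refinement -> LLM-guided reasoning).
# Names are illustrative assumptions, not LiloDriver's real interfaces.

from dataclasses import dataclass, field
from typing import List

import numpy as np


@dataclass
class SceneMemoryBank:
    """Stores embeddings of past scenes with the planner strategy used there."""
    embeddings: List[np.ndarray] = field(default_factory=list)
    strategies: List[str] = field(default_factory=list)

    def add(self, embedding: np.ndarray, strategy: str) -> None:
        self.embeddings.append(embedding)
        self.strategies.append(strategy)

    def retrieve(self, query: np.ndarray, k: int = 3) -> List[str]:
        """Return strategies of the k most similar stored scenes (cosine similarity)."""
        if not self.embeddings:
            return []
        sims = [float(query @ e / (np.linalg.norm(query) * np.linalg.norm(e) + 1e-8))
                for e in self.embeddings]
        top = np.argsort(sims)[::-1][:k]
        return [self.strategies[i] for i in top]


def perceive(raw_observation: dict) -> dict:
    """Stage 1 (perception): extract ego state and surrounding agents from raw input."""
    return {"ego": raw_observation.get("ego"), "agents": raw_observation.get("agents", [])}


def encode_scene(scene: dict) -> np.ndarray:
    """Stage 2 (scene encoding): map the structured scene to a fixed-size vector."""
    # Placeholder featurization; a real system would use a learned encoder.
    return np.array([len(scene["agents"]), 1.0], dtype=np.float32)


def llm_reason(scene: dict, candidate_strategies: List[str]) -> str:
    """Stage 4 (LLM-guided reasoning): select/refine a planner strategy from candidates."""
    # Stand-in for an LLM call that reasons over the scene and retrieved strategies.
    return candidate_strategies[0] if candidate_strategies else "default_rule_based"


def plan_one_step(raw_observation: dict, memory: SceneMemoryBank) -> str:
    scene = perceive(raw_observation)           # Stage 1: perception
    embedding = encode_scene(scene)             # Stage 2: scene encoding
    candidates = memory.retrieve(embedding)     # Stage 3: memory-based refinement
    strategy = llm_reason(scene, candidates)    # Stage 4: LLM-guided reasoning
    memory.add(embedding, strategy)             # lifelong update, no retraining
    return strategy
```

In this reading, continual adaptation comes from growing the scene memory bank at inference time rather than updating model weights, which is what allows zero-shot handling of newly encountered long-tail scenarios.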