A Survey of Reinforcement Learning-Based Motion Planning for Autonomous Driving: Lessons Learned from a Driving Task Perspective

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of systematic guidance in reinforcement learning (RL) design for autonomous driving motion planning. Methodologically, it proposes a task-driven RL-based Motion Planning (RL-MoP) taxonomy and design paradigm, structuring RL modeling around driving task categories and integrating Markov decision processes, policy gradient methods, multi-agent RL, hierarchical RL, and simulation-to-reality transfer techniques into a reusable framework. Key contributions include: (1) distilling twelve RL design principles tailored to canonical driving tasks; (2) identifying six frontier challenges—e.g., sparse rewards, safety-critical constraints, and dynamic interaction—and proposing implementable mitigation strategies; and (3) establishing cross-scenario transferable design heuristics and practical implementation guidelines. The framework significantly enhances the robustness and interpretability of autonomous decision-making systems while supporting scalable, task-aware RL deployment in complex urban environments.
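To make the sparse-reward issue the summary mentions concrete, here is a minimal, self-contained sketch (not from the paper; the lane-keeping MDP, state/action sets, and shaped reward are illustrative assumptions) of tabular Q-learning on a toy lane-keeping task. A shaped reward (negative distance from the lane centre) gives a dense learning signal, whereas a terminal-only reward would rarely be observed during exploration:

```python
import random

# Toy lane-keeping MDP (hypothetical illustration): states are discrete
# lane offsets {-2,...,2}; actions steer left / keep / steer right.
STATES = [-2, -1, 0, 1, 2]
ACTIONS = [-1, 0, 1]

def step(state, action):
    """Deterministic transition: steering shifts the lane offset, clipped to the road."""
    next_state = max(-2, min(2, state + action))
    reward = -abs(next_state)  # shaped reward: penalize distance from lane centre
    return next_state, reward

def q_learning(episodes=500, horizon=10, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        for _ in range(horizon):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2, r = step(s, a)
            # standard Q-learning update
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)  # greedy policy steers back toward offset 0 from either side
```

Real RL-MoP systems replace the tabular Q-function with policy-gradient or actor-critic networks over continuous states, but the design questions the survey catalogues (state abstraction, reward shaping, exploration) appear already in this small setting.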

📝 Abstract
Reinforcement learning (RL), with its ability to explore and optimize policies in complex, dynamic decision-making tasks, has emerged as a promising approach to addressing motion planning (MoP) challenges in autonomous driving (AD). Despite rapid advancements in RL and AD, a systematic description and interpretation of the RL design process tailored to diverse driving tasks remains underdeveloped. This survey provides a comprehensive review of RL-based MoP for AD, focusing on lessons from task-specific perspectives. We first outline the fundamentals of RL methodologies and then survey their applications in MoP, analyzing scenario-specific features and task requirements to shed light on their influence on RL design choices. Building on this analysis, we summarize key design experiences, extract insights from various driving task applications, and provide guidance for future implementations. Additionally, we examine the frontier challenges in RL-based MoP, review recent efforts to address these challenges, and propose strategies for overcoming unresolved issues.
Problem

Research questions and friction points this paper is trying to address.

Challenges of applying RL to motion planning for autonomous driving
Lack of systematic guidance for RL design across diverse driving tasks
Unresolved frontier challenges in RL-based motion planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-based policy optimization for complex driving tasks
Task-specific RL design paradigm covering diverse driving scenarios
Mitigation strategies for frontier challenges in motion planning
Zhuoren Li
Ph.D. Candidate
autonomous vehicles, intelligent transportation, motion planning, reinforcement learning
Guizhe Jin
School of Automotive Studies, Tongji University, Shanghai 201804, China
Ran Yu
GESIS – Leibniz Institute for the Social Sciences
knowledge graph, search as learning, information retrieval, spatial-temporal data analysis
Zhiwen Chen
School of Automotive Studies, Tongji University, Shanghai 201804, China
Nan Li
School of Automotive Studies, Tongji University, Shanghai 201804, China
Wei Han
School of Automotive Studies, Tongji University, Shanghai 201804, China
Lu Xiong
School of Automotive Studies, Tongji University, Shanghai 201804, China
Bo Leng
School of Automotive Studies, Tongji University, Shanghai 201804, China
Jia Hu
University of Exeter
edge-cloud computing, resource optimization, smart city, network security, applied machine learning
Ilya Kolmanovsky
Professor of Aerospace Engineering, University of Michigan
control, automotive, aerospace
Dimitar Filev
Hagler Institute for Advanced Study, Texas A&M University, College Station, TX 77840 USA