🤖 AI Summary
Existing vision-language models struggle to jointly reason about perception, prediction, and planning in end-to-end autonomous driving, often compromising decision-making because intermediate reasoning stages are omitted or handled in isolation. This work proposes AutoDrive-P³, a framework that unifies the three stages through a structured chain-of-thought reasoning process, trained on the newly curated P³-CoT dataset. To enhance both efficiency and robustness, the authors introduce a hierarchical reinforcement learning algorithm, P³-GRPO, along with dual "detailed thinking" and "fast thinking" modes that balance thorough analysis with rapid response. The method achieves state-of-the-art motion-planning performance on the nuScenes and NAVSIMv1/v2 benchmarks, improving the safety and interpretability of autonomous driving systems.
📝 Abstract
Vision-language models (VLMs) are increasingly being adopted for end-to-end autonomous driving systems due to their exceptional performance in handling long-tail scenarios. However, current VLM-based approaches suffer from two major limitations: 1) Some VLMs directly output planning results without chain-of-thought (CoT) reasoning, bypassing the crucial perception and prediction stages, which creates a significant domain gap and compromises decision-making capability; 2) Other VLMs can generate outputs for perception, prediction, and planning tasks but employ a fragmented decision-making approach in which these modules operate separately, leading to a lack of synergy that undermines overall planning performance. To address these limitations, we propose ${AutoDrive\text{-}P^3}$, a novel framework that seamlessly integrates $\textbf{P}$erception, $\textbf{P}$rediction, and $\textbf{P}$lanning through structured reasoning. We introduce the ${P^3\text{-}CoT}$ dataset to facilitate coherent reasoning and propose ${P^3\text{-}GRPO}$, a hierarchical reinforcement learning algorithm that provides progressive supervision across all three tasks. Specifically, ${AutoDrive\text{-}P^3}$ progressively generates CoT reasoning and answers for perception, prediction, and planning: perception provides essential information for subsequent prediction and planning, while perception and prediction together contribute to the final planning decisions, enabling safer and more interpretable autonomous driving. Additionally, to balance inference efficiency with performance, we introduce dual thinking modes: detailed thinking and fast thinking. Extensive experiments on both open-loop (nuScenes) and closed-loop (NAVSIMv1/v2) benchmarks demonstrate that our approach achieves state-of-the-art performance in planning tasks. Code is available at https://github.com/haha-yuki-haha/AutoDrive-P3.
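The progressive structure described in the abstract can be sketched as a simple staged pipeline. This is a hedged illustration, not the authors' implementation: the VLM call is a stub, and the prompt wording, function names, and the interpretation of "fast thinking" as skipping the intermediate stages are all assumptions for illustration.

```python
def vlm(prompt: str) -> str:
    """Stub standing in for a vision-language model call (assumption)."""
    return f"<answer to: {prompt.splitlines()[-1]}>"

def autodrive_p3_sketch(scene: str, mode: str = "detailed") -> dict:
    """Progressive perception -> prediction -> planning CoT, per the abstract."""
    out = {}
    if mode == "fast":
        # Fast thinking (assumed interpretation): answer the planning
        # query directly, without emitting intermediate CoT stages.
        out["planning"] = vlm(f"Scene: {scene}\nPlan the ego trajectory.")
        return out
    # Detailed thinking: each stage conditions on the previous answers.
    out["perception"] = vlm(f"Scene: {scene}\nDescribe key objects and lanes.")
    out["prediction"] = vlm(
        f"Scene: {scene}\nPerception: {out['perception']}\n"
        "Predict the future motion of surrounding agents."
    )
    # Planning sees both perception and prediction, as the abstract states.
    out["planning"] = vlm(
        f"Scene: {scene}\nPerception: {out['perception']}\n"
        f"Prediction: {out['prediction']}\n"
        "Plan the ego trajectory."
    )
    return out
```

The point of the staging is that the planning prompt explicitly carries the perception and prediction answers, rather than letting three independent heads answer in parallel.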