🤖 AI Summary
This work investigates the causal validity of the “reasoning-driven planning” hypothesis in vision-language driving models. To this end, we construct the large-scale DriveMind dataset and propose the “reasoning–planning decoupling” hypothesis. We design a training-free probing framework to quantify a model's reliance on ego-state and navigation priors. Methodologically, we generate structured inputs from nuPlan, integrate visual question answering with chain-of-thought prompting, and train with supervised fine-tuning followed by Group Relative Policy Optimization. Causal analysis is conducted via attention visualization and systematic ablation—removing reasoning chains, ego-state priors, or navigation priors. Experiments show that ablating prior knowledge severely degrades planning performance, whereas removing reasoning chains has negligible impact. This demonstrates that current vision-language models rely predominantly on implicit priors—not explicit reasoning—for planning decisions. Our work provides the first empirical evidence of a causal disconnection between reasoning and planning in VLM-based driving agents.
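The ablation protocol described above can be sketched in a few lines: re-run planning with one input component blanked out at a time and record the score drop. This is a minimal, hypothetical illustration — `planner`, `score`, and the component names are stand-ins, not the paper's actual API.

```python
def ablation_deltas(planner, score, sample,
                    components=("cot", "ego_state", "navigation")):
    """Return the drop in planning score when each input component is removed."""
    baseline = score(planner(sample))
    deltas = {}
    for comp in components:
        ablated = dict(sample)
        ablated[comp] = None  # blank out this signal before re-planning
        deltas[comp] = baseline - score(planner(ablated))
    return deltas

# Toy planner that, like the paper's finding, depends on priors but not CoT:
toy_planner = lambda s: 1.0 if s.get("ego_state") and s.get("navigation") else 0.2
identity_score = lambda quality: quality

sample = {"cot": "reasoning chain...", "ego_state": "v=5 m/s", "navigation": "turn left"}
deltas = ablation_deltas(toy_planner, identity_score, sample)
# deltas["cot"] is ~0 while deltas["ego_state"] is large,
# mirroring the reported reasoning-planning decoupling.
```

The same loop applies unchanged to any planner/metric pair, which is what makes the comparison across ablation conditions clean.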
📝 Abstract
Vision-Language Model (VLM) driving agents promise explainable end-to-end autonomy by first producing natural-language reasoning and then predicting a planned trajectory. However, whether planning is causally driven by this reasoning remains a critical but unverified assumption. To investigate this, we build DriveMind, a large-scale driving Visual Question Answering (VQA) corpus with plan-aligned Chain-of-Thought (CoT), automatically generated from nuPlan. Our data generation process converts sensor data and annotations into structured inputs and, crucially, separates priors from to-be-reasoned signals, enabling clean information ablations. Using DriveMind, we train representative VLM agents with Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) and evaluate them with nuPlan's metrics. Our results, unfortunately, indicate a consistent causal disconnect between reasoning and planning: removing ego/navigation priors causes large drops in planning scores, whereas removing CoT produces only minor changes. Attention analysis further shows that planning attends primarily to priors rather than to the CoT. Based on this evidence, we propose the Reasoning-Planning Decoupling Hypothesis, positing that the reasoning yielded by training is an ancillary byproduct rather than a causal mediator. To enable efficient diagnosis, we also introduce a novel, training-free probe that measures an agent's reliance on priors by evaluating its planning robustness against minor input perturbations. In summary, we provide the community with a new dataset and a diagnostic tool to evaluate the causal fidelity of future models.
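The training-free probe mentioned above can be illustrated with a small sketch: perturb one input channel slightly and measure how far the planned waypoints move. All names here (`planner` returning (x, y) waypoints, the channel keys) are illustrative assumptions, not the paper's interface.

```python
import math

def perturbation_sensitivity(planner, sample, channel, perturb):
    """Mean waypoint displacement after perturbing one input channel."""
    base = planner(sample)
    noisy = dict(sample)
    noisy[channel] = perturb(sample[channel])  # minor perturbation of one channel
    alt = planner(noisy)
    return sum(math.dist(p, q) for p, q in zip(base, alt)) / len(base)

# Toy agent whose plan tracks the ego-speed prior but ignores the CoT text:
def toy_planner(s):
    v = s["ego_speed"]
    return [(v * t, 0.0) for t in (1, 2, 3)]  # straight-line waypoints

sample = {"ego_speed": 5.0, "cot": "slow down for the crosswalk"}
ego_sens = perturbation_sensitivity(toy_planner, sample, "ego_speed", lambda v: v + 0.5)
cot_sens = perturbation_sensitivity(toy_planner, sample, "cot", lambda c: c + " carefully")
# ego_sens > 0 while cot_sens == 0: the probe flags reliance on priors.
```

Because the probe only needs forward passes of a frozen planner, it requires no retraining, which is the sense in which the diagnostic is training-free.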