More Than Meets the Eye? Uncovering the Reasoning-Planning Disconnect in Training Vision-Language Driving Models

📅 2025-10-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the causal validity of the "reasoning-driven planning" hypothesis in vision-language driving models. To this end, we construct the large-scale DriveMind dataset and propose the "reasoning-planning decoupling" hypothesis. We design a training-free probing framework to quantify model reliance on ego-state and navigation priors. Methodologically, we generate structured inputs from nuPlan, integrate visual question answering with chain-of-thought prompting, and train with supervised fine-tuning (SFT) and Group Relative Policy Optimization (GRPO). Causal analysis is conducted via attention visualization and systematic ablation: removing reasoning chains, ego-state priors, or navigation priors. Experiments show that ablating prior knowledge severely degrades planning performance, whereas removing reasoning chains has negligible impact. This demonstrates that current vision-language models predominantly rely on implicit priors, not explicit reasoning, for planning decisions. Our work provides the first empirical evidence of a causal disconnect between reasoning and planning in VLM-based driving agents.
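The information-ablation analysis summarized above can be sketched as follows. Both `toy_score` and the sample fields are hypothetical stand-ins for a real planner evaluation (e.g., nuPlan's closed-loop score); the toy scorer's weights are chosen to mirror the paper's reported finding, not derived from it.

```python
def ablation_effect(score_fn, sample, field):
    """Drop in planning score when one input field is ablated (set to None)."""
    full = score_fn(sample)
    ablated = {k: (None if k == field else v) for k, v in sample.items()}
    return full - score_fn(ablated)

# Illustrative scorer: planning quality depends heavily on the priors and
# barely on the chain-of-thought, mimicking the reported ablation pattern.
def toy_score(s):
    return (0.45 * (s["ego_state"] is not None)
            + 0.45 * (s["nav_prior"] is not None)
            + 0.02 * (s["cot"] is not None))

sample = {"ego_state": [0.0, 0.0], "nav_prior": "turn_left", "cot": "..."}
```

Under this toy scorer, ablating either prior costs far more score than ablating the reasoning chain, which is the qualitative signature the paper's real experiments exhibit.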

📝 Abstract
Vision-Language Model (VLM) driving agents promise explainable end-to-end autonomy by first producing natural-language reasoning and then predicting trajectory planning. However, whether planning is causally driven by this reasoning remains a critical but unverified assumption. To investigate this, we build DriveMind, a large-scale driving Visual Question Answering (VQA) corpus with plan-aligned Chain-of-Thought (CoT), automatically generated from nuPlan. Our data generation process converts sensors and annotations into structured inputs and, crucially, separates priors from to-be-reasoned signals, enabling clean information ablations. Using DriveMind, we train representative VLM agents with Supervised Fine-Tuning (SFT) and Group Relative Policy Optimization (GRPO) and evaluate them with nuPlan's metrics. Our results, unfortunately, indicate a consistent causal disconnect in reasoning-planning: removing ego/navigation priors causes large drops in planning scores, whereas removing CoT produces only minor changes. Attention analysis further shows that planning primarily focuses on priors rather than the CoT. Based on this evidence, we propose the Reasoning-Planning Decoupling Hypothesis, positing that the training-yielded reasoning is an ancillary byproduct rather than a causal mediator. To enable efficient diagnosis, we also introduce a novel, training-free probe that measures an agent's reliance on priors by evaluating its planning robustness against minor input perturbations. In summary, we provide the community with a new dataset and a diagnostic tool to evaluate the causal fidelity of future models.
Problem

Research questions and friction points this paper is trying to address.

Investigating causal disconnect between reasoning and planning in VLM driving agents
Evaluating whether trajectory planning is truly driven by language reasoning
Developing diagnostic tools to measure reliance on priors versus reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatically generated driving VQA corpus with plan-aligned reasoning
Training agents with supervised fine-tuning and group relative policy optimization
Introducing training-free probe to measure reliance on priors
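The training-free probe listed above can be sketched as a perturbation test: lightly jitter the prior inputs and measure how far the planned trajectory drifts. Everything here is a hypothetical illustration of the idea (the function name, noise model, and dummy planners are not from the paper).

```python
import numpy as np

def prior_reliance_probe(plan_fn, ego_state, noise_scale=0.05, n_trials=8, seed=0):
    """Estimate an agent's reliance on its ego-state prior (illustrative sketch).

    plan_fn maps an ego-state vector to a planned trajectory of shape (T, 2).
    Returns the mean per-waypoint L2 drift of the trajectory under small
    Gaussian perturbations of the ego-state input: high drift suggests
    planning leans on the prior rather than on reasoning.
    """
    ego_state = np.asarray(ego_state, dtype=float)
    base = np.asarray(plan_fn(ego_state))
    rng = np.random.default_rng(seed)
    drifts = []
    for _ in range(n_trials):
        noisy = ego_state + rng.normal(scale=noise_scale, size=ego_state.shape)
        drift = np.linalg.norm(np.asarray(plan_fn(noisy)) - base, axis=-1).mean()
        drifts.append(drift)
    return float(np.mean(drifts))
```

A planner that ignores the ego-state prior scores zero on this probe, while one that copies the prior into its trajectory scores high, so the metric separates the two failure modes without any retraining.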
Xurui Song
S-Lab, Nanyang Technological University, Singapore
Shuo Huai
Nanyang Technological University
JingJing Jiang
College of Computing and Data Science, Nanyang Technological University, Singapore
Jiayi Kong
S-Lab, Nanyang Technological University, Singapore
Jun Luo
College of Computing and Data Science, Nanyang Technological University, Singapore