🤖 AI Summary
To address the limited decision-making robustness of autonomous driving (AD) systems in partially observable, complex real-world scenarios, this paper proposes distilling the commonsense reasoning of vision-language models (VLMs) into a modular end-to-end driving stack. Direct VLM invocation at inference time incurs high computational overhead and prohibits safety decomposition; instead, the proposed method aligns, during training, the intermediate latent representations of the perception, prediction, and planning modules with structured text features that encode a VLM's driving-reasoning process. This transfers VLM-derived commonsense knowledge into a lightweight, modular AD stack without inference-time VLM calls. On NuScenes, the approach outperforms end-to-end baselines that do not embed reasoning by 10% in ℓ₂ distance while maintaining high inference speed and avoiding the 160+ GB memory footprint of large-VLM inference.
📝 Abstract
While autonomous driving (AD) stacks struggle with decision making under partial observability and real-world complexity, human drivers are capable of commonsense reasoning to make near-optimal decisions with limited information. Recent work has attempted to leverage finetuned Vision-Language Models (VLMs) for trajectory planning at inference time to emulate human behavior. Despite their success in benchmark evaluations, these methods are often impractical to deploy (a 70B-parameter VLM running inference at merely 8 tokens per second requires more than 160 GB of memory), and their monolithic network structure prohibits safety decomposition. To bridge this gap, we propose VLM-Embedded Reasoning for autonomous Driving (VERDI), a training-time framework that distills the reasoning process and commonsense knowledge of VLMs into the AD stack. VERDI augments modular differentiable end-to-end (e2e) AD models by aligning intermediate module outputs at the perception, prediction, and planning stages with text features, produced by VLMs, that explain the driving reasoning process. By encouraging alignment in latent space, VERDI enables the modular AD stack to internalize structured reasoning without incurring the inference-time costs of large VLMs. We demonstrate the effectiveness of our method on the NuScenes dataset and find that VERDI outperforms existing e2e methods that do not embed reasoning by 10% in $\ell_2$ distance, while maintaining high inference speed.
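The core training-time idea, aligning a driving module's latent output with a VLM's text-reasoning embedding, can be sketched as a simple cosine-alignment loss. This is an illustrative sketch, not the paper's implementation: the function and variable names (`alignment_loss`, the projection matrix `W`) are hypothetical, and the paper's actual loss and projection heads may differ.

```python
import numpy as np

def alignment_loss(module_feats, text_feats, W):
    """Mean (1 - cosine similarity) between projected module latents and
    VLM text embeddings. Hypothetical sketch of latent-space alignment."""
    # Project the module's latent features into the text-embedding space.
    proj = module_feats @ W
    proj = proj / np.linalg.norm(proj, axis=-1, keepdims=True)
    text = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    # Loss is 0 when each projected feature points along its text feature.
    return float(np.mean(1.0 - np.sum(proj * text, axis=-1)))

# Toy usage: random features standing in for perception/prediction/planning
# latents and VLM reasoning-text embeddings.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # batch of module latents
texts = rng.normal(size=(4, 6))   # batch of VLM text embeddings
W = rng.normal(size=(8, 6))       # learnable projection (fixed here)
print(alignment_loss(feats, texts, W))
```

In a differentiable e2e stack this term would be added, per module, to the usual planning objective so that gradients shape the intermediate representations toward the VLM's reasoning features.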