🤖 AI Summary
This work addresses a critical gap in autonomous driving evaluation, which has predominantly focused on perception while neglecting the decision-making capabilities of Vision-Language Models (VLMs). We propose the first decision-centric, progressive benchmark comprising three hierarchical levels—object, scene, and decision—with 6,650 carefully curated questions, designed to systematically assess the boundary of VLMs' capabilities as they move from perception to decision-making, as well as the interpretability of their reasoning. By establishing an interpretable evaluation framework, we reveal a weak correlation between perception and decision performance and introduce an analyzer model that enables large-scale automatic annotation. Our analysis identifies key failure modes of prevailing VLMs in autonomous driving decision tasks, offering both a rigorous evaluation foundation and actionable directions for developing safer, more reliable models.
📝 Abstract
Autonomous driving is a highly challenging domain that requires reliable perception and safe decision-making in complex scenarios. Recent vision-language models (VLMs) demonstrate reasoning and generalization abilities, opening new possibilities for autonomous driving; however, existing benchmarks and metrics overemphasize perceptual competence and fail to adequately assess decision-making processes. In this work, we present AutoDriDM, a decision-centric, progressive benchmark with 6,650 questions across three dimensions: Object, Scene, and Decision. We evaluate mainstream VLMs to delineate the perception-to-decision capability boundary in autonomous driving, and our correlation analysis reveals weak alignment between perception and decision-making performance. We further conduct explainability analyses of models' reasoning processes, identifying key failure modes such as logical reasoning errors, and introduce an analyzer model to automate large-scale annotation. AutoDriDM bridges the gap between perception-centered and decision-centered evaluation, providing guidance toward safer and more reliable VLMs for real-world autonomous driving.