Towards Safer and Understandable Driver Intention Prediction

📅 2025-10-10
🤖 AI Summary
To address the lack of interpretability in driver intention prediction for autonomous driving, this paper proposes the Video Concept Bottleneck Model (VCBM) for prospective, interpretable pre-action intention inference. Methodologically, VCBM fuses multimodal inputs (driver eye-tracking signals and ego-vehicle-view video) within a Transformer-based architecture to learn spatiotemporally coherent, concept-level representations. It further incorporates hierarchical textual explanations and a multi-label t-SNE visualization to enable causal reasoning and the disentanglement of explanatory factors. To support this work, the authors introduce DAAD-X, the first benchmark dataset featuring synchronized eye-tracking trajectories and fine-grained natural-language explanations. Experiments demonstrate that VCBM significantly enhances prediction interpretability, that Transformers outperform CNNs in concept disentanglement and causal analysis, and that the model achieves state-of-the-art performance in both intention classification accuracy and explanation consistency.

📝 Abstract
Autonomous driving (AD) systems are becoming increasingly capable of handling complex tasks, mainly due to recent advances in deep learning and AI. As interactions between autonomous systems and humans increase, the interpretability of decision-making processes in driving systems becomes crucial for ensuring safe driving operations. Successful human-machine interaction requires understanding the underlying representations of the environment and the driving task, which remains a significant challenge for deep learning-based systems. To address this, we introduce the task of interpretable maneuver prediction before the maneuver occurs, i.e., driver intent prediction (DIP), which plays a critical role in driver safety for AD systems. To foster research in interpretable DIP, we curate the eXplainable Driving Action Anticipation Dataset (DAAD-X), a new multimodal, ego-centric video dataset that provides hierarchical, high-level textual explanations as causal reasoning for the driver's decisions. These explanations are derived from both the driver's eye gaze and the ego-vehicle's perspective. Next, we propose the Video Concept Bottleneck Model (VCBM), a framework that generates spatio-temporally coherent explanations inherently, without relying on post-hoc techniques. Finally, through extensive evaluations of the proposed VCBM on the DAAD-X dataset, we demonstrate that transformer-based models exhibit greater interpretability than conventional CNN-based models. Additionally, we introduce a multilabel t-SNE visualization technique to illustrate the disentanglement of and causal correlation among multiple explanations. Our data, code, and models are available at: https://mukil07.github.io/VCBM.github.io/
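The concept-bottleneck structure described in the abstract can be sketched in a few lines: raw gaze and video features are first mapped to interpretable concept scores, and the maneuver label is predicted *only* from those scores, so every prediction routes through human-readable concepts. The concept names, intention labels, and weights below are illustrative placeholders, not the paper's actual model or learned parameters.

```python
# Minimal sketch of a concept bottleneck predictor (plain Python, toy weights).
# Pipeline: features -> interpretable concept scores -> intention label.

def dot(w, x):
    """Inner product of a weight vector and a feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Hypothetical concept and intention vocabularies, for illustration only.
CONCEPTS = ["gaze_left_mirror", "gaze_right_mirror", "vehicle_ahead_slowing"]
INTENTIONS = ["left_lane_change", "right_lane_change", "braking"]

def predict_concepts(features, concept_weights):
    """Map fused (gaze + video) features to one score per concept."""
    return [dot(w, features) for w in concept_weights]

def predict_intention(concept_scores, label_weights):
    """Predict the maneuver from concept scores alone (the bottleneck)."""
    logits = [dot(w, concept_scores) for w in label_weights]
    best = max(range(len(logits)), key=lambda i: logits[i])
    return INTENTIONS[best], logits

# Toy usage: features strongly activating the "gaze_left_mirror" concept.
concept_w = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # one row per concept
label_w = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
scores = predict_concepts([1.0, 0.0], concept_w)
label, _ = predict_intention(scores, label_w)
```

Because the label head sees only `scores`, inspecting those concept activations explains the prediction directly, with no post-hoc attribution step.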
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of driver intention prediction for autonomous driving safety
Addressing black-box decision-making in deep learning-based autonomous systems
Providing causal reasoning explanations for driver actions through multimodal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Developed multimodal dataset with hierarchical textual explanations
Proposed video concept bottleneck model for coherent explanations
Introduced multilabel visualization technique for causal correlation
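The multilabel visualization idea above can be illustrated with a small helper: each sample may carry several explanation labels at once, so one base colour per label is blended into a single RGB value for the corresponding 2-D point. The label names and colours are hypothetical; the 2-D coordinates themselves would come from an off-the-shelf t-SNE (e.g. scikit-learn's `sklearn.manifold.TSNE`) run on the concept embeddings, a step assumed rather than shown here.

```python
# Hypothetical colour-blending helper for a multi-label t-SNE scatter plot.
# Each explanation label gets a base RGB colour; a sample carrying several
# labels is drawn with the average of its labels' colours.

BASE_COLOURS = {
    "gaze_left_mirror": (1.0, 0.0, 0.0),       # red
    "gaze_right_mirror": (0.0, 1.0, 0.0),      # green
    "vehicle_ahead_slowing": (0.0, 0.0, 1.0),  # blue
}

def blend_colour(labels):
    """Average the base colours of all labels active on one sample."""
    if not labels:
        return (0.5, 0.5, 0.5)  # grey for unlabelled points
    channels = zip(*(BASE_COLOURS[l] for l in labels))
    return tuple(sum(c) / len(labels) for c in channels)

# A point explained by both mirror-gaze labels blends red and green.
colour = blend_colour(["gaze_left_mirror", "gaze_right_mirror"])
```

Points whose blended colours cluster together then hint at correlated explanations, while well-separated pure colours suggest disentangled concepts.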