PDB-Eval: An Evaluation of Large Multimodal Models for Description and Explanation of Personalized Driving Behavior

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing driving behavior datasets are limited in explaining a driver's personalized actions through external visual evidence. This work introduces PDB-Eval, a fine-grained benchmark for understanding personalized driver behavior and for aligning Large Multimodal Models (MLLMs) with driving comprehension and reasoning. The benchmark has two components: PDB-X, which evaluates MLLMs' understanding of temporal driving scenes by grounding the driver's internal-view behavior in external-view visual evidence, and PDB-QA, a visual explanation question-answering task for MLLM instruction fine-tuning that bridges the domain gap without harming generalizability. Fine-tuning MLLMs on these fine-grained descriptions and explanations improves zero-shot question-answering performance by up to 73.2%, turn intention prediction on Brain4Cars by up to 12.5%, and recognition across all AIDE tasks by up to 11.0%, while enhancing explainable reasoning over personalized driving behaviors.

📝 Abstract
Understanding a driver's behavior and intentions is important for potential risk assessment and early accident prevention. Safety and driver assistance systems can be tailored to individual drivers' behavior, significantly enhancing their effectiveness. However, existing datasets are limited in describing and explaining general vehicle movements based on external visual evidence. This paper introduces a benchmark, PDB-Eval, for a detailed understanding of Personalized Driver Behavior and for aligning Large Multimodal Models (MLLMs) with driving comprehension and reasoning. Our benchmark consists of two main components, PDB-X and PDB-QA. PDB-X evaluates MLLMs' understanding of temporal driving scenes. Our dataset is designed to find valid visual evidence from the external view to explain the driver's behavior from the internal view. To align MLLMs' reasoning abilities with driving tasks, we propose PDB-QA as a visual explanation question-answering task for MLLM instruction fine-tuning. As a generic learning task for generative models like MLLMs, PDB-QA can bridge the domain gap without harming MLLMs' generalizability. Our evaluation indicates that fine-tuning MLLMs on fine-grained descriptions and explanations can effectively bridge the gap between MLLMs and the driving domain, improving zero-shot performance on question-answering tasks by up to 73.2%. We further evaluate the MLLMs fine-tuned on PDB-X on Brain4Cars' intention prediction and AIDE's recognition tasks. We observe performance improvements of up to 12.5% on the turn intention prediction task in Brain4Cars, and consistent improvements of up to 11.0% on all tasks in AIDE.
Problem

Research questions and friction points this paper is trying to address.

Evaluating Large Multimodal Models for driver behavior understanding
Bridging domain gap in driving comprehension with MLLMs
Improving zero-shot performance in driving-related QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces PDB-Eval benchmark for driver behavior analysis
Proposes PDB-QA for MLLM instruction fine-tuning
Improves zero-shot QA performance by up to 73.2%