🤖 AI Summary
To address the weak incident scene understanding, insufficient causal attribution, and limited preventive recommendation capabilities in autonomous driving, this paper proposes the first text-to-video generation framework dedicated to traffic incident understanding. Methodologically: (1) we introduce EMM-AU, the first language-video aligned dataset for incident understanding; (2) we design a causal-reasoning-guided, text-conditioned video diffusion model that integrates multimodal alignment training with cross-modal feature disentanglement and fusion. Our key contribution lies in unifying fine-grained natural language descriptions, causal reasoning modeling, and incident video generation within a single generative paradigm, novel in both scope and formulation. Extensive automatic and human evaluations demonstrate state-of-the-art performance, achieving significant improvements in causal attribution accuracy (+18.7%) and preventive recommendation plausibility (+22.3%). This work establishes an interpretable and intervention-capable paradigm for autonomous driving safety analysis.
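The core mechanism behind a text-conditioned video diffusion model can be illustrated with a toy NumPy sketch. This is a hypothetical illustration, not the AVD2 implementation: it shows the standard forward noising process and a classifier-free-guidance combine, which is one common way to steer denoising toward a text condition; all shapes, schedules, and function names here are illustrative assumptions.

```python
import numpy as np

# Toy sketch of text-conditioned diffusion (illustrative only, not the AVD2 code).
rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal-retention factor

def q_sample(x0, t, eps):
    """Forward process: x_t = sqrt(alpha_bar_t)*x0 + sqrt(1-alpha_bar_t)*eps."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def cfg_eps(eps_cond, eps_uncond, guidance_scale=7.5):
    """Classifier-free guidance: push the noise estimate toward the
    text-conditioned prediction and away from the unconditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# A tiny stand-in "video": 4 frames of 8x8, single channel.
x0 = rng.standard_normal((4, 8, 8))
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, t=50, eps=eps)

# Sanity check: with the true noise known, x0 is exactly recoverable.
x0_rec = (xt - np.sqrt(1.0 - alpha_bar[50]) * eps) / np.sqrt(alpha_bar[50])
```

In a real model, `eps_cond` and `eps_uncond` would come from a denoising network evaluated with and without the text embedding; the guidance scale trades prompt fidelity against sample diversity.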
📝 Abstract
Traffic accidents present complex challenges for autonomous driving, often featuring unpredictable scenarios that hinder accurate system interpretation and response. Nonetheless, prevailing methodologies fall short in elucidating the causes of accidents and proposing preventive measures, due to the paucity of training data specific to accident scenarios. In this work, we introduce AVD2 (Accident Video Diffusion for Accident Video Description), a novel framework that enhances accident scene understanding by generating accident videos aligned with detailed natural language descriptions and reasoning, resulting in the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding) dataset. Empirical results reveal that integrating the EMM-AU dataset establishes state-of-the-art performance across both automated metrics and human evaluations, markedly advancing the domains of accident analysis and prevention. Project resources are available at https://an-answer-tree.github.io.