AVD2: Accident Video Diffusion for Accident Video Description

📅 2025-02-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address weak incident scene understanding, insufficient causal attribution, and limited preventive-recommendation capability in autonomous driving, this paper proposes the first text-to-video generation framework dedicated to traffic incident understanding. Methodologically: (1) we introduce EMM-AU, the first language-video aligned dataset for incident understanding; (2) we design a causal-reasoning-guided, text-conditioned video diffusion model that integrates multimodal alignment training with cross-modal feature disentanglement and fusion. Our key contribution lies in unifying fine-grained natural language descriptions, causal reasoning modeling, and incident video generation within a single generative paradigm, novel in both scope and formulation. Extensive automatic and human evaluations demonstrate state-of-the-art performance, with significant improvements in causal attribution accuracy (+18.7%) and preventive-recommendation plausibility (+22.3%). This work establishes an interpretable and intervention-capable paradigm for autonomous driving safety analysis.
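The summary mentions a text-conditioned video diffusion model but gives no architectural detail. For orientation, the standard mechanism for conditioning a diffusion model on text is classifier-free guidance, sketched below as a toy NumPy example; the `denoiser` stand-in, tensor sizes, and guidance weight are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy sketch of classifier-free guidance, the usual way text
# conditioning steers a video diffusion model. Everything here is
# illustrative; AVD2's real architecture is not detailed in this page.

rng = np.random.default_rng(0)

def denoiser(x, text_emb=None):
    # Stand-in for a learned noise predictor: a linear function of the
    # noisy latent, nudged by the (optional) text embedding.
    eps = 0.1 * x
    if text_emb is not None:
        eps = eps + 0.05 * text_emb.mean()
    return eps

def guided_eps(x, text_emb, w=7.5):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the text-conditioned one by weight w.
    eps_uncond = denoiser(x)
    eps_cond = denoiser(x, text_emb)
    return eps_uncond + w * (eps_cond - eps_uncond)

x_t = rng.standard_normal((4, 8, 8))   # tiny "video" latent: 4 frames of 8x8
text = rng.standard_normal(16)         # embedding of an accident description
eps_hat = guided_eps(x_t, text, w=7.5)
print(eps_hat.shape)                   # (4, 8, 8)
```

With `w = 1` the guided prediction reduces exactly to the conditional one; larger `w` pushes generations to follow the accident description more strictly, at the cost of diversity.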


๐Ÿ“ Abstract
Traffic accidents present complex challenges for autonomous driving, often featuring unpredictable scenarios that hinder accurate system interpretation and responses. Nonetheless, prevailing methodologies fall short in elucidating the causes of accidents and proposing preventive measures due to the paucity of training data specific to accident scenarios. In this work, we introduce AVD2 (Accident Video Diffusion for Accident Video Description), a novel framework that enhances accident scene understanding by generating accident videos aligned with detailed natural language descriptions and reasoning, resulting in the contributed EMM-AU (Enhanced Multi-Modal Accident Video Understanding) dataset. Empirical results reveal that integrating the EMM-AU dataset establishes state-of-the-art performance across both automated metrics and human evaluations, markedly advancing the domains of accident analysis and prevention. Project resources are available at https://an-answer-tree.github.io
Problem

Research questions and friction points this paper is trying to address.

Enhances accident scene understanding
Generates detailed accident video descriptions
Advances accident analysis and prevention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates accident-aligned detailed video descriptions
Introduces Enhanced Multi-Modal Accident Understanding dataset
Improves accident analysis with natural language reasoning
Cheng Li
Institute for AI Industry Research (AIR), Tsinghua University; Academy of Interdisciplinary Studies, the Hong Kong University of Science and Technology
Keyuan Zhou
Institute for AI Industry Research (AIR), Tsinghua University; College of Communication Engineering, Jilin University
Tong Liu
Institute for AI Industry Research (AIR), Tsinghua University; School of Cyber Science and Engineering, Nanjing University of Science and Technology
Yu Wang
Institute for AI Industry Research (AIR), Tsinghua University; School of Automation, Beijing Institute of Technology
Mingqiao Zhuang
College of Foreign Language and Literature, Fudan University
Huan-ang Gao
Ph.D. student, Tsinghua University
Bu Jin
HKUST
Hao Zhao
Institute for AI Industry Research (AIR), Tsinghua University; Beijing Academy of Artificial Intelligence (BAAI); Lightwheel AI