From Dashcam Videos to Driving Simulations: Stress Testing Automated Vehicles against Rare Events

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Testing Automated Driving Systems (ADS) against rare hazardous scenarios is challenging: such cases are hard to identify, and manual scenario construction is costly. Method: The paper proposes a fully automated video-to-simulation framework. A prompt-engineered video-language model (VLM) generates SCENIC scenario scripts from dashcam footage; a driving-behavior feature similarity metric drives event-level fidelity optimization through iterative refinement; and parametric variations (e.g., weather, road conditions) enable search-based stress testing. Implemented on CARLA, the end-to-end pipeline completes the conversion within minutes without human intervention. Contribution/Results: Experiments show substantial improvements in both the fidelity of rare-scenario reproduction and test coverage. The framework establishes an efficient, scalable, fully automated testing paradigm for ADS robustness validation, reducing reliance on manual effort while improving scenario diversity and realism.
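The summary names a "driving-behavior feature similarity metric" but does not define it. As a minimal sketch, assuming per-frame `(x, y, speed, heading)` trajectory samples at fixed time steps, an event-level behavior metric of that kind might compare aggregate features (speed profile, acceleration harshness, turning intensity) of the real and simulated runs:

```python
import math

def behavior_features(traj):
    """traj: list of (x, y, speed, heading) samples at fixed time steps.

    The four features below are illustrative assumptions, not the
    paper's actual feature set.
    """
    speeds = [s for _, _, s, _ in traj]
    headings = [h for _, _, _, h in traj]
    accel = [b - a for a, b in zip(speeds, speeds[1:])]      # per-step speed change
    yaw = [b - a for a, b in zip(headings, headings[1:])]    # per-step heading change
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return [
        mean(speeds),                   # average speed
        max(speeds),                    # peak speed
        mean([abs(a) for a in accel]),  # harshness of acceleration/braking
        mean([abs(y) for y in yaw]),    # how sharply the vehicle turns
    ]

def behavior_similarity(real_traj, sim_traj):
    """Cosine similarity of the two feature vectors; 1.0 = identical behavior."""
    fr, fs = behavior_features(real_traj), behavior_features(sim_traj)
    dot = sum(a * b for a, b in zip(fr, fs))
    norm = math.sqrt(sum(a * a for a in fr)) * math.sqrt(sum(b * b for b in fs))
    return dot / norm if norm else 0.0
```

A scalar in [0, 1] like this is what makes the iterative refinement loop possible: the score can be fed back into the prompt as a fidelity signal.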

📝 Abstract
Testing Automated Driving Systems (ADS) in simulation with realistic driving scenarios is important for verifying their performance. However, converting real-world driving videos into simulation scenarios is a significant challenge due to the complexity of interpreting high-dimensional video data and the time-consuming nature of precise manual scenario reconstruction. In this work, we propose a novel framework that automates the conversion of real-world car crash videos into detailed simulation scenarios for ADS testing. Our approach leverages prompt-engineered Video Language Models (VLMs) to transform dashcam footage into SCENIC scripts, which define the environment and driving behaviors in the CARLA simulator, enabling the generation of realistic simulation scenarios. Importantly, rather than solely aiming for one-to-one scenario reconstruction, our framework focuses on capturing the essential driving behaviors from the original video while offering flexibility in parameters such as weather or road conditions to facilitate search-based testing. Additionally, we introduce a similarity metric that helps iteratively refine the generated scenario through feedback by comparing key features of driving behaviors between the real and simulated videos. Our preliminary results demonstrate substantial time efficiency, finishing the real-to-sim conversion in minutes with full automation and no human intervention, while maintaining high fidelity to the original driving events.
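The abstract's feedback loop (generate a SCENIC script, simulate it, compare behaviors, refine) can be sketched as follows. `generate_scenic`, `simulate`, and `similarity` are illustrative stand-ins for the VLM call, the CARLA run, and the behavior metric; they are not the paper's actual interfaces:

```python
def refine(video_features, generate_scenic, simulate, similarity,
           target=0.9, max_rounds=5):
    """Iteratively regenerate a scenario until it matches the source video.

    video_features : behavior features extracted from the real dashcam video
    generate_scenic: stand-in for the prompt-engineered VLM producing a script
    simulate       : stand-in for running the script and extracting features
    similarity     : stand-in for the driving-behavior similarity metric
    """
    feedback = None
    best_script, best_score = None, -1.0
    for _ in range(max_rounds):
        script = generate_scenic(video_features, feedback)  # VLM call stand-in
        sim_features = simulate(script)                     # CARLA run stand-in
        score = similarity(video_features, sim_features)
        if score > best_score:
            best_script, best_score = script, score
        if score >= target:                                 # fidelity reached
            break
        feedback = score  # returned to the VLM prompt on the next round
    return best_script, best_score
```

The loop keeps the best-scoring script seen so far, so a noisy late round cannot discard an earlier high-fidelity result.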
Problem

Research questions and friction points this paper addresses.

Real-world Driving Video Conversion
Simulation for Autonomous Driving
Rare and Challenging Scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous Driving Simulation
Video-to-Text Modeling
Behavior Fidelity