AI Summary
Existing vision-language model (VLM) safety evaluation benchmarks struggle to cover complex, dynamic hazardous scenarios, particularly lacking spatiotemporal modeling of moving, intrusive, and distant objects. To address this gap, this work proposes HazardForge, a pipeline integrating image editing models, layout decision algorithms, and a scene validation module to enable the first scalable generation of anomalous driving scenes. Leveraging this pipeline, the authors construct MovSafeBench, a large-scale multiple-choice question-answering benchmark comprising 7,254 images and corresponding QA pairs across 13 categories of dynamic objects. Experimental results reveal a significant performance drop in VLMs under anomalous conditions, especially in tasks requiring fine-grained motion understanding, thereby highlighting critical limitations in current models' capacity for safe decision-making.
Abstract
Vision Language Models (VLMs) are increasingly deployed in autonomous vehicles and mobile systems, making it crucial to evaluate their ability to support safe decision-making in complex environments. However, existing benchmarks inadequately cover diverse hazardous situations, especially anomalous scenarios with spatio-temporal dynamics. While image editing models are a promising means of synthesizing such hazards, it remains challenging to generate well-formulated scenarios that include the moving, intrusive, and distant objects frequently observed in the real world. To address this gap, we introduce **HazardForge**, a scalable pipeline that combines image editing models with layout decision algorithms and validation modules to generate these scenarios. Using HazardForge, we construct **MovSafeBench**, a multiple-choice question (MCQ) benchmark comprising 7,254 images and corresponding QA pairs across 13 object categories, covering both normal and anomalous objects. Experiments on MovSafeBench show that VLM performance degrades notably in scenes containing anomalous objects, with the largest drop in scenarios requiring nuanced motion understanding.