🤖 AI Summary
This work addresses a key limitation of current laboratory safety monitoring, which relies heavily on manual inspection and cannot automatically recognize structured hazardous scenarios from raw visual input alone. To bridge the gap between visual perception and semantic reasoning, the authors propose a scene graph–guided alignment approach, leveraging large language models and image generation techniques to construct what they describe as the first synthetic dataset of aligned text–image–scene graph triplets. Experiments on 1,207 samples demonstrate that the proposed method significantly improves the hazard detection performance of vision-language models in visual-only settings, addressing their difficulty in inferring structured relational semantics directly from pixel-level data.
📝 Abstract
In laboratories, even minor unsafe actions can lead to severe injuries, yet continuous safety monitoring -- beyond mandatory pre-lab safety training -- is limited by human availability. Vision-language models (VLMs) offer promise for autonomous laboratory safety monitoring, but their effectiveness in realistic settings is unclear due to the lack of visual evaluation data, as most safety incidents are documented primarily as unstructured text. To address this gap, we first introduce a structured data generation pipeline that converts textual laboratory scenarios into aligned triples of (image, scene graph, ground truth), using large language models as scene graph architects and image generation models as renderers. Our experiments on the resulting synthetic dataset of 1,207 samples across 362 unique scenarios, covering seven open- and closed-source models, show that VLMs perform well when given a textual scene graph but degrade substantially in visual-only settings, indicating difficulty in extracting structured object relationships directly from pixels. To overcome this, we propose a post-training context-engineering approach, scene-graph-guided alignment, which bridges perceptual gaps in VLMs by translating visual inputs into structured scene graphs better aligned with VLM reasoning, improving hazard detection performance in visual-only settings.
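The generation pipeline described in the abstract -- an LLM drafting a scene graph from a textual scenario, an image model rendering it, and a hazard label derived from the graph -- can be sketched roughly as below. All function names and the keyword-based toy logic are hypothetical placeholders standing in for the paper's LLM and image-generation components, not its actual implementation:

```python
from dataclasses import dataclass

def llm_scene_graph(scenario: str) -> dict:
    # Toy stand-in for the "scene graph architect": in the paper an LLM
    # extracts objects and relations; here we just match keywords.
    objects = [w for w in ("beaker", "flame", "gloves") if w in scenario]
    relations = (
        [("beaker", "near", "flame")]
        if {"beaker", "flame"} <= set(objects)
        else []
    )
    return {"objects": objects, "relations": relations}

def render_image(scene_graph: dict) -> str:
    # Placeholder renderer: a real pipeline would call an image-generation
    # model conditioned on the scene graph; we return a filename stub.
    return "image_of_" + "_".join(scene_graph["objects"]) + ".png"

@dataclass
class Triplet:
    image: str
    scene_graph: dict
    ground_truth: bool  # is a hazardous relation present?

def build_triplet(scenario: str) -> Triplet:
    # Aligned (image, scene graph, ground truth) triple, as in the abstract.
    sg = llm_scene_graph(scenario)
    return Triplet(render_image(sg), sg, ground_truth=bool(sg["relations"]))

t = build_triplet("an open beaker sits near a flame without gloves")
print(t.ground_truth)  # True: the hazardous relation (beaker, near, flame) is present
```

Because the scene graph is constructed before rendering, the hazard label comes from the structured relations rather than from the pixels, which is what makes the triples usable as visual evaluation data.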