TSHA: A Benchmark for Visual Language Models in Trustworthy Safety Hazard Assessment Scenarios

πŸ“… 2026-03-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing vision-language models struggle to generalize to real-world, complex indoor safety risk assessment due to reliance on synthetic data, oversimplified tasks, and the absence of rigorous evaluation protocols. To address this gap, this work proposes TSHAβ€”the first comprehensive multimodal benchmark specifically designed for real-world home safety risk assessment. TSHA integrates in-situ photographs, web-sourced images, existing datasets, and AI-generated content to construct a heterogeneous, multi-source dataset comprising 81,809 training samples and 1,707 challenging test instances, accompanied by a stringent evaluation protocol. Systematic evaluation of 23 state-of-the-art vision-language models on TSHA reveals that models fine-tuned on this benchmark achieve performance gains of up to 18.3 points and demonstrate significantly improved generalization on other benchmarks, thereby exposing critical limitations of current models in safety-critical applications.
πŸ“ Abstract
Recent advances in vision-language models (VLMs) have accelerated their application to indoor safety hazard assessment. However, existing benchmarks suffer from three fundamental limitations: (1) heavy reliance on synthetic datasets constructed via simulation software, creating a significant domain gap with real-world environments; (2) oversimplified safety tasks with artificial constraints on hazard and scene types, thereby limiting model generalization; and (3) absence of rigorous evaluation protocols to thoroughly assess model capabilities in complex home safety scenarios. To address these challenges, we introduce TSHA (**T**rustworthy **S**afety **H**azards **A**ssessment), a comprehensive benchmark comprising 81,809 carefully curated training samples drawn from four complementary sources: existing indoor datasets, internet images, AIGC images, and newly captured images. TSHA also includes a highly challenging test set of 1,707 samples, comprising not only a carefully selected subset from the training distribution but also newly added videos and panoramic images containing multiple safety hazards, used to evaluate model robustness in complex safety scenarios. Extensive experiments on 23 popular VLMs demonstrate that current VLMs lack robust capabilities for safety hazard assessment. Importantly, models trained on the TSHA training set not only achieve a significant performance improvement of up to +18.3 points on the TSHA test set but also exhibit enhanced generalizability across other benchmarks, underscoring the substantial contribution and importance of the TSHA benchmark.
Problem

Research questions and friction points this paper is trying to address.

vision-language models
safety hazard assessment
benchmark
domain gap
model generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language models
safety hazard assessment
real-world benchmark
domain gap
robust evaluation