Prompt to Protection: A Comparative Study of Multimodal LLMs in Construction Hazard Recognition

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current construction safety research lacks standardized, cross-model benchmarking of multimodal large language models (MLLMs) for hazard identification in real-world construction site imagery. Method: We conduct the first systematic evaluation of five state-of-the-art MLLMs—Claude-3 Opus, GPT-4.5, GPT-4o, GPT-o3, and Gemini 2.0 Pro—using zero-shot, few-shot, and chain-of-thought (CoT) prompting strategies on authentic construction safety images. Contribution/Results: CoT prompting significantly enhances performance across all models; GPT-4.5 and GPT-o3 achieve the highest overall accuracy, with an F1-score of 78.3%. Prompt engineering critically governs model reliability and output consistency. This work establishes the first reproducible, multimodal LLM benchmark framework for construction visual safety analysis and provides empirically grounded guidelines for prompt optimization in safety-critical vision-language applications.

📝 Abstract
The recent emergence of multimodal large language models (LLMs) has introduced new opportunities for improving visual hazard recognition on construction sites. Unlike traditional computer vision models that rely on domain-specific training and extensive datasets, modern LLMs can interpret and describe complex visual scenes from simple natural language prompts. However, despite growing interest in their applications, there has been limited investigation into how different LLMs perform in safety-critical visual tasks within the construction domain. To address this gap, this study conducts a comparative evaluation of five state-of-the-art LLMs (Claude-3 Opus, GPT-4.5, GPT-4o, GPT-o3, and Gemini 2.0 Pro) to assess their ability to identify potential hazards in real-world construction images. Each model was tested under three prompting strategies: zero-shot, few-shot, and chain-of-thought (CoT). Zero-shot prompting involved minimal instruction, few-shot incorporated basic safety context and a hazard source mnemonic, and CoT provided step-by-step reasoning examples to scaffold model thinking. Quantitative analysis was performed using precision, recall, and F1-score metrics across all conditions. Results reveal that prompting strategy significantly influenced performance, with CoT prompting consistently producing higher accuracy across models. Additionally, LLM performance varied under different conditions, with GPT-4.5 and GPT-o3 outperforming the others in most settings. The findings also demonstrate the critical role of prompt design in enhancing the accuracy and consistency of multimodal LLMs for construction safety applications. This study offers actionable insights into the integration of prompt engineering and LLMs for practical hazard recognition, contributing to the development of more reliable AI-assisted safety systems.
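The precision, recall, and F1 metrics used in the evaluation can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes hazards are compared as sets of labels per image, and the example hazard names are invented for demonstration.

```python
# Minimal sketch (not the authors' evaluation code): scoring a model's
# predicted hazard labels against ground-truth annotations for one image.

def score_hazards(predicted, ground_truth):
    """Return (precision, recall, f1) for two sets of hazard labels."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    tp = len(predicted & ground_truth)  # hazards the model got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the model reports 3 hazards, 2 of which match
# the 4 annotated hazards for the image.
p, r, f1 = score_hazards(
    {"missing guardrail", "no hard hat", "exposed wiring"},
    {"missing guardrail", "no hard hat", "unsecured ladder", "open trench"},
)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.667 0.5 0.571
```

Averaging these per-image scores across the test set yields the aggregate figures the paper reports, such as the 78.3% F1-score.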
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLMs' hazard recognition in construction using images
Compares five LLMs under different prompting strategies
Assesses impact of prompt design on model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal LLMs interpret visual scenes via prompts
Comparative evaluation of five advanced LLMs
Chain-of-thought prompting enhances hazard recognition accuracy
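The three prompting strategies compared above can be sketched as templates. This is an illustrative sketch only: the wording, the particular hazard-source categories in the few-shot mnemonic, and the message-packaging helper are assumptions, not the authors' exact prompts or API code.

```python
# Illustrative prompt templates for the three strategies (assumed wording,
# not taken from the paper).

ZERO_SHOT = "List all safety hazards visible in this construction site image."

FEW_SHOT = (
    "You are a construction safety inspector. Consider these hazard sources "
    "(assumed mnemonic): gravity, motion, electricity, mechanical, chemical.\n"
    "Example 1 -> hazard: worker near unguarded edge (gravity).\n"
    "Example 2 -> hazard: energized cable in standing water (electricity).\n"
    "Now list all hazards in the attached image."
)

CHAIN_OF_THOUGHT = (
    "Analyze the attached construction image step by step:\n"
    "1. Describe the scene and the activities under way.\n"
    "2. For each worker and piece of equipment, check exposure to each "
    "hazard source.\n"
    "3. Conclude with a numbered list of the hazards identified."
)

def build_messages(prompt, image_b64):
    """Package a text prompt plus one image for a generic multimodal
    chat API (field names vary by provider)."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image", "data": image_b64},
        ],
    }]
```

Holding the image and model fixed while swapping only the template is what isolates the effect of prompt design on recognition accuracy.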
Nishi Chaudhary
Department of Construction Management, Colorado State University
S. J. Uddin
Department of Construction Management, Colorado State University
Sathvik Sharath Chandra
Department of Civil Engineering, Dayananda Sagar College of Engineering, Bengaluru, Karnataka, India
Anto Ovid
PhD in Civil Engineering, North Carolina State University
Construction Safety, BIM, AI, Robotics, Wearable Sensors
Alex Albert
Associate Professor, North Carolina State University
Construction Safety, Injury Prevention, Risk Management, Hazard Recognition, Safety Interventions