🤖 AI Summary
Prior research largely overlooks the engineering challenges of deploying proactive safety systems in real-world road environments, especially work zones, such as roadside infrastructure adaptation and multi-sensor fusion.
Method: This project proposes an edge–cloud collaborative intelligent traffic safety system integrating LiDAR, millimeter-wave radar, and cameras. Leveraging an edge computing platform, it enables real-time multi-source data calibration, trajectory-level sensor fusion, and collision conflict prediction. A novel predictive digital twin framework is developed to support early risk identification and proactive alerting.
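To make "collision conflict prediction" concrete, the sketch below checks time-to-collision (TTC) between two fused track states under a constant-velocity assumption. This is an illustrative toy only: the `TrackState` class, the 2 m conflict radius, and the constant-velocity model are assumptions for this example, not the paper's actual prediction method.

```python
# Illustrative sketch: constant-velocity time-to-collision (TTC) check
# between two fused trajectory states. All names and thresholds here are
# hypothetical; the deployed system's conflict model is not specified.
from dataclasses import dataclass
import math

@dataclass
class TrackState:
    x: float   # position east (m)
    y: float   # position north (m)
    vx: float  # velocity east (m/s)
    vy: float  # velocity north (m/s)

def time_to_collision(a: TrackState, b: TrackState, radius: float = 2.0):
    """Earliest time (s) at which the tracks close within `radius` metres
    under constant velocity, or None if they never do."""
    dx, dy = b.x - a.x, b.y - a.y
    dvx, dvy = b.vx - a.vx, b.vy - a.vy
    # Solve |d + t*dv|^2 = radius^2 for the smallest non-negative t.
    aa = dvx * dvx + dvy * dvy
    bb = 2 * (dx * dvx + dy * dvy)
    cc = dx * dx + dy * dy - radius * radius
    if aa == 0:  # no relative motion
        return 0.0 if cc <= 0 else None
    disc = bb * bb - 4 * aa * cc
    if disc < 0:  # closest approach stays outside the radius
        return None
    t = (-bb - math.sqrt(disc)) / (2 * aa)
    return t if t >= 0 else (0.0 if cc <= 0 else None)

# A vehicle approaching a stopped work-zone truck 50 m ahead at 10 m/s:
car = TrackState(x=0.0, y=0.0, vx=10.0, vy=0.0)
truck = TrackState(x=50.0, y=0.0, vx=0.0, vy=0.0)
ttc = time_to_collision(car, truck)  # 4.8 s to close the 48 m gap
```

A real deployment would replace the constant-velocity assumption with the system's trajectory predictions and trigger a roadside warning when TTC drops below a threshold.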
Results: The system was deployed and validated in the N-2/US-75 corridor bridge maintenance zone in Nebraska, achieving high-accuracy, real-time hazard warnings. It systematically documents a three-stage implementation methodology (sensor selection, calibration, and fusion), filling a critical gap in the engineering deployment of multimodal proactive safety systems in complex construction zones. The work delivers a reusable technical pathway and practical paradigm for real-world intelligent transportation safety systems.
📝 Abstract
Proactive safety systems that anticipate and mitigate traffic risks before incidents occur are increasingly recognized as essential for improving work zone safety. Unlike traditional reactive methods, these systems rely on real-time sensing, trajectory prediction, and intelligent infrastructure to detect potential hazards. Existing simulation-based studies often overlook, and real-world deployment studies rarely discuss, the practical challenges of deploying such systems in operational settings, particularly those involving roadside infrastructure and multi-sensor integration and fusion. This study addresses that gap by presenting deployment insights and technical lessons learned from a real-world implementation of a multi-sensor safety system at an active bridge repair work zone along the N-2/US-75 corridor in Lincoln, Nebraska. The deployed system combines LiDAR, radar, and camera sensors with an edge computing platform to support multi-modal object tracking, trajectory fusion, and real-time analytics. Specifically, this study presents key lessons learned across three critical stages of deployment: (1) sensor selection and placement, (2) sensor calibration, system integration, and validation, and (3) sensor fusion. Additionally, we propose a predictive digital twin framework that leverages fused trajectory data for early conflict detection and real-time warning generation, enabling proactive safety interventions.
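One common building block of trajectory fusion across LiDAR, radar, and camera tracks is inverse-variance weighting of per-sensor position estimates for a matched object. The minimal sketch below illustrates that idea; the sensor variances and the `fuse_estimates` helper are hypothetical, and the paper's actual fusion algorithm may differ.

```python
# Illustrative sketch: inverse-variance-weighted fusion of per-sensor
# position estimates for one coordinate of a matched track. Sensor
# variances here are made up; the deployed fusion method is not specified.
def fuse_estimates(estimates):
    """estimates: list of (value, variance) pairs for one coordinate.
    Returns the inverse-variance-weighted mean and the fused variance."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for (v, _), w in zip(estimates, weights)) / total
    return fused, 1.0 / total  # fused variance shrinks below every input

# LiDAR (precise, var 0.05) and radar (coarser, var 0.50) x-estimates
# for the same object, after association and temporal alignment:
fused_x, fused_var = fuse_estimates([(12.30, 0.05), (12.80, 0.50)])
```

The fused estimate lands closer to the lower-variance (LiDAR) measurement, and its variance is smaller than either sensor's alone, which is why fusion can outperform any single modality.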