🤖 AI Summary
To address the low efficiency, high false-positive rate, and poor adaptability to dynamic scenarios of manual safety inspection in mining operations, this paper proposes a vision-language model tailored for unsafe-behavior recognition. We construct a domain-specific dataset comprising 9,000 annotated samples of safety violations. Methodologically, we introduce a Dynamic Clause Filtering module, which reduces inference latency by 13.56%, and a Behavior Amplification module that enhances fine-grained action modeling. Our approach integrates Visual Question Answering (VQA)-based training, Top-K clause filtering, region-aware feature enhancement, and multi-source auxiliary cues, enabling timestamped automatic violation detection on real-world surveillance video streams. Compared with an un-fine-tuned 72B baseline, it achieves absolute improvements of 22.01%, 34.22%, and 28.37% in precision, recall, and F1-score, respectively. We further validate deployment feasibility via a lightweight web interface.
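The Top-K clause filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual Clause Filtering module: it assumes the frame and the regulation clauses have already been embedded into a shared vector space (e.g. by the VLM's encoders), and simply ranks clauses by cosine similarity, keeping only the K best for the downstream prompt.

```python
import numpy as np

def top_k_clauses(frame_emb, clause_embs, k=5):
    """Rank regulation clauses by cosine similarity to a frame embedding
    and keep only the Top-K most relevant ones (a sketch, not the paper's
    exact scoring function)."""
    frame = frame_emb / np.linalg.norm(frame_emb)
    clauses = clause_embs / np.linalg.norm(clause_embs, axis=1, keepdims=True)
    scores = clauses @ frame                 # cosine similarity per clause
    top_idx = np.argsort(scores)[::-1][:k]   # indices of the K best clauses
    return top_idx, scores[top_idx]

# Toy example: 40 clauses embedded in a 16-dimensional space.
rng = np.random.default_rng(0)
clause_embs = rng.normal(size=(40, 16))
frame_emb = rng.normal(size=16)
idx, scores = top_k_clauses(frame_emb, clause_embs, k=5)
print(idx, scores)
```

Because only K clauses (instead of all 40) reach the model's context, the prompt shrinks, which is one plausible source of the reported latency reduction.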
📝 Abstract
Industrial accidents, particularly in high-risk domains such as surface and underground mining, are frequently caused by unsafe worker behaviors. Traditional manual inspection remains labor-intensive, error-prone, and insufficient for large-scale, dynamic environments, highlighting the urgent need for intelligent, automated safety monitoring. In this paper, we present MonitorVLM, a novel vision-language framework designed to detect safety violations directly from surveillance video streams. MonitorVLM introduces three key innovations: (1) a domain-specific violation dataset comprising 9,000 vision-question-answer (VQA) samples across 40 high-frequency mining regulations, enriched with augmentation and auxiliary detection cues; (2) a clause filter (CF) module that dynamically selects the Top-K most relevant clauses, reducing inference latency by 13.56% while maintaining accuracy; and (3) a behavior magnifier (BM) module that enhances worker regions to improve fine-grained action recognition, yielding additional gains of 3.45% in precision and 8.62% in recall. Experimental results demonstrate that MonitorVLM significantly outperforms baseline vision-language models, achieving improvements of 22.01% in precision, 34.22% in recall, and 28.37% in F1 score over the un-fine-tuned 72B baseline. A lightweight web-based interface further integrates MonitorVLM into practical workflows, enabling automatic violation reporting with video timestamps. This study highlights the potential of multimodal large models to enhance occupational safety monitoring in mining and beyond.
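The behavior magnifier can be illustrated with a minimal sketch. Assuming a worker bounding box is already available from an auxiliary detector (the box format `(x1, y1, x2, y2)` and the nearest-neighbour upscaling are illustrative assumptions, not the paper's actual BM implementation), the idea is to crop the worker region and enlarge it so the model sees fine-grained actions at higher effective resolution:

```python
import numpy as np

def magnify_worker_region(frame, box, scale=2):
    """Crop a worker bounding box from a frame and enlarge it so that
    fine-grained actions occupy more pixels (sketch of the BM idea).
    `box` is a hypothetical (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]
    # Nearest-neighbour upscaling: repeat pixels along both spatial axes.
    return np.repeat(np.repeat(crop, scale, axis=0), scale, axis=1)

# Toy 720p frame with a 120x200-pixel worker box.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
magnified = magnify_worker_region(frame, (100, 200, 220, 400), scale=2)
print(magnified.shape)  # (400, 240, 3)
```

A production system would more likely use learned or interpolation-based resampling, but the principle is the same: amplify the region of interest before it reaches the vision-language model.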