🤖 AI Summary
Visual monitoring of industrial assembly faces a core challenge: robust action recognition in marker-free, unconstrained environments, whereas existing approaches rely on fixed workstations or explicit visual markers. This paper proposes ViMAT, the first end-to-end system to enable real-time assembly action recognition without markers or fixed-workstation constraints. ViMAT improves perceptual robustness through multi-view video feature extraction and temporal state modeling; integrates neural perception with symbolic reasoning over prior task knowledge to handle partial observability and visual uncertainty; and uses a lightweight architecture for real-time inference. Evaluated on two real-world production-line tasks, LEGO component replacement and hydraulic press die reconfiguration, ViMAT outperforms mainstream baselines by an average of 12.6% in accuracy, demonstrating practicality and generalization in complex industrial settings.
📝 Abstract
Visual monitoring of industrial assembly tasks is critical for preventing equipment damage due to procedural errors and ensuring worker safety. Although commercial solutions exist, they typically require rigid workspace setups or the application of visual markers to simplify the problem. We introduce ViMAT, a novel AI-driven system for real-time visual monitoring of assembly tasks that operates without these constraints. ViMAT combines a perception module that extracts visual observations from multi-view video streams with a reasoning module that infers the most likely action being performed based on the observed assembly state and prior task knowledge. We validate ViMAT on two assembly tasks, involving the replacement of LEGO components and the reconfiguration of hydraulic press molds, demonstrating its effectiveness through quantitative and qualitative analysis in challenging real-world scenarios characterized by partial and uncertain visual observations. Project page: https://tev-fbk.github.io/ViMAT
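The abstract describes a two-stage pipeline: a perception module that turns multi-view video into observations of the assembly state, and a reasoning module that combines those observations with prior task knowledge to infer the most likely ongoing action. The sketch below illustrates that idea only in miniature; all names, data structures, and the fusion/scoring scheme are illustrative assumptions, not ViMAT's actual API or method.

```python
# Illustrative sketch (NOT ViMAT's actual interface): a minimal
# perception + reasoning loop in the spirit of the abstract.
from collections import defaultdict

# Hypothetical prior task knowledge: which actions are admissible
# from each assembly state.
TASK_GRAPH = {
    "base_attached": ["place_block", "inspect"],
    "block_placed":  ["fasten", "inspect"],
}

def perceive(per_view_scores):
    """Stand-in for the perception module: average per-camera-view
    state scores into one distribution over assembly states."""
    fused = defaultdict(float)
    for view in per_view_scores:
        for state, score in view.items():
            fused[state] += score / len(per_view_scores)
    return dict(fused)

def infer_action(state_dist, task_graph):
    """Stand-in for the reasoning module: weight each candidate action
    by the probability mass of states from which the task graph
    allows it, then return the most likely action."""
    action_scores = defaultdict(float)
    for state, p in state_dist.items():
        for action in task_graph.get(state, []):
            action_scores[action] += p
    return max(action_scores, key=action_scores.get)

# Two views disagree (partial, uncertain observations); fusing them
# and applying the task prior still yields a single best action.
views = [
    {"base_attached": 0.7, "block_placed": 0.3},
    {"base_attached": 0.4, "block_placed": 0.6},
]
print(infer_action(perceive(views), TASK_GRAPH))  # → inspect
```

The point of the toy example is the division of labor: symbolic priors (here, a hand-written task graph) constrain which actions are even considered, so noisy or conflicting per-view neural scores need only rank the admissible candidates.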