OpenMarcie: Dataset for Multimodal Action Recognition in Industrial Environments

📅 2026-03-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the scarcity of large-scale, multimodal human action recognition datasets in industrial settings, which hinders accurate monitoring and assessment of worker behavior in smart factories. To bridge this gap, we introduce and publicly release OpenMarcie, the largest multimodal action recognition dataset tailored for manufacturing environments. OpenMarcie integrates synchronized wearable sensor data and multi-view video recordings from 36 participants performing natural assembly tasks on bicycles and 3D printers, yielding over 37 hours of data across more than 200 channels spanning eight modalities. Notably, it is the first dataset to support open-ended task descriptions, collaborative behavior modeling, and cross-modal alignment in real-world industrial scenarios. Its utility is validated through three benchmark tasks, establishing it as a foundational resource for advancing human-robot collaboration, safety monitoring, and process optimization in industrial applications.
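One of the supported benchmarks is cross-modal alignment between the wearable-sensor and video streams. As a minimal illustrative sketch (not the paper's benchmark implementation), the NumPy snippet below computes a symmetric InfoNCE-style contrastive loss over a batch of paired sensor/video embeddings, the kind of objective commonly used for this task; the embedding dimension, batch size, and temperature are arbitrary assumptions.

```python
import numpy as np

def info_nce(sensor_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired sensor/video embeddings.
    Matching pairs sit on the diagonal of the similarity matrix."""
    s = sensor_emb / np.linalg.norm(sensor_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = s @ v.T / temperature                      # (B, B) cosine similarities
    diag = np.arange(len(s))
    log_sm_s2v = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_sm_v2s = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(log_sm_s2v[diag, diag].mean() + log_sm_v2s[diag, diag].mean()) / 2

# Toy batch of 8 paired (hypothetical) sensor and video embeddings.
rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 128)), rng.normal(size=(8, 128))))
```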

๐Ÿ“ Abstract
Smart factories use advanced technologies to optimize production and increase efficiency. To this end, the recognition of worker activity allows for accurate quantification of performance metrics, improving efficiency holistically while contributing to worker safety. OpenMarcie is, to the best of our knowledge, the largest multimodal dataset designed for human action monitoring in manufacturing environments. It includes data from wearable sensing modalities and from cameras distributed around the workspace. The dataset is structured around two experimental settings, involving a total of 36 participants. In the first setting, twelve participants perform a bicycle assembly and disassembly task under semi-realistic conditions without a fixed protocol, promoting divergent and goal-oriented problem-solving. The second experiment involves twenty-five volunteers (24 of whom provided valid data) engaged in a 3D printer assembly task, with the 3D printer manufacturer's instructions provided to guide the volunteers in acquiring procedural knowledge. This setting also includes sequential collaborative assembly, where participants assess and correct each other's progress, reflecting real-world manufacturing dynamics. OpenMarcie includes over 37 hours of egocentric and exocentric, multimodal, and multipositional data, featuring eight distinct data types and more than 200 independent information channels. The dataset is benchmarked across three human activity recognition tasks: activity classification, open-vocabulary captioning, and cross-modal alignment.
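To make the benchmark setup concrete, the sketch below shows one plausible way to window a synchronized wearable stream against video frame timestamps for the activity-classification task. The sampling rates, channel count, and function names are assumptions for illustration only and do not reflect the actual OpenMarcie file layout.

```python
import numpy as np

def make_windows(sensor_ts, sensor_data, video_ts, win_s=2.0, hop_s=1.0):
    """Cut a timestamped sensor stream into fixed-length windows and pair each
    window with the indices of the video frames that fall inside it."""
    windows, t = [], sensor_ts[0]
    while t + win_s <= sensor_ts[-1]:
        s_mask = (sensor_ts >= t) & (sensor_ts < t + win_s)
        v_idx = np.where((video_ts >= t) & (video_ts < t + win_s))[0]
        windows.append((sensor_data[s_mask], v_idx))
        t += hop_s
    return windows

# Toy example: a 50 Hz, 6-channel wearable stream (e.g. 3-axis accelerometer +
# gyroscope) and 30 fps frame timestamps over one minute of recording.
sensor_ts = np.arange(0, 60, 1 / 50)
sensor_data = np.random.randn(len(sensor_ts), 6)
video_ts = np.arange(0, 60, 1 / 30)
pairs = make_windows(sensor_ts, sensor_data, video_ts)
print(len(pairs), pairs[0][0].shape, pairs[0][1].shape)
```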
Problem

Research questions and friction points this paper is trying to address.

multimodal action recognition
industrial environments
human activity recognition
smart factories
dataset
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal dataset
industrial action recognition
egocentric and exocentric vision
wearable sensing
collaborative assembly