🤖 AI Summary
To address performance degradation of helmet-mounted pose estimation systems in challenging industrial, construction, and emergency-response environments—characterized by smoke, dust, and low-texture surfaces—this work introduces the first publicly available helmet-mounted IMU dataset annotated with high-accuracy motion-capture ground truth, capturing head motion from ten subjects across diverse scenarios. We propose a hybrid architecture integrating LSTM and Transformer modules for joint bias correction and pose estimation, enabling robust long-sequence modeling and comprehensive evaluation across temporal window sizes, motion patterns, and sensor modalities. Experiments demonstrate substantial improvements in head-pose estimation accuracy over prior methods. The dataset, source code, and standardized evaluation protocol are fully open-sourced, establishing the first reproducible performance baseline for this task. This contribution provides critical infrastructure for data-driven, robust head-pose estimation under real-world adverse conditions.
📝 Abstract
Helmet-mounted wearable positioning systems are crucial for enhancing safety and facilitating coordination in industrial, construction, and emergency rescue environments. Existing approaches such as LiDAR-Inertial Odometry (LIO) and Visual-Inertial Odometry (VIO) often struggle to localize under adverse environmental conditions such as dust, smoke, and limited visual features. To address these limitations, we propose a novel head-mounted Inertial Measurement Unit (IMU) dataset with ground truth, aimed at advancing data-driven IMU pose estimation. Our dataset captures human head motion patterns using a helmet-mounted system, with data from ten participants performing various activities. We explore the application of neural networks, specifically Long Short-Term Memory (LSTM) and Transformer networks, to correct IMU biases and improve localization accuracy. Additionally, we evaluate the performance of these methods across different IMU data window sizes, motion patterns, and sensor types. We release a publicly available dataset, demonstrate the feasibility of advanced neural network approaches for helmet-based localization, and provide evaluation metrics to establish a baseline for future studies in this field. Data and code can be found at https://lqiutong.github.io/HelmetPoser.github.io/.
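Evaluating learned IMU models across different window sizes, as described above, requires segmenting the raw inertial stream into fixed-length windows before they are fed to the network. The sketch below illustrates one minimal way to do this; the sample rate, window length, and overlap are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch: segmenting a raw IMU stream into fixed-size windows,
# as one would when evaluating models across different window sizes.
# Sample rate and window parameters here are assumptions for demonstration.

def make_windows(samples, window_size, stride):
    """Split a sequence of IMU samples into (possibly overlapping) windows.

    samples: sequence of per-timestep readings (e.g. [ax, ay, az, gx, gy, gz])
    window_size: number of timesteps per window
    stride: step between consecutive window start indices
    """
    return [
        samples[start:start + window_size]
        for start in range(0, len(samples) - window_size + 1, stride)
    ]

# Example: 1 s of data at an assumed 100 Hz, split into 0.5 s windows
# with 50% overlap (stride of 25 samples).
imu_stream = [[0.0] * 6 for _ in range(100)]  # placeholder 6-axis readings
windows = make_windows(imu_stream, window_size=50, stride=25)
print(len(windows))  # → 3 (windows starting at samples 0, 25, 50)
```

Each resulting window can then be passed to an LSTM or Transformer model as one input sequence; sweeping `window_size` reproduces the kind of window-size comparison the abstract describes.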