AI Summary
Existing approaches struggle to achieve millisecond-level, high-precision human motion capture in everyday settings due to high costs, substantial bandwidth requirements, and insufficient robustness under low-light conditions. This work proposes FlashCap, a novel system that integrates flickering LED markers with an event camera and fuses multimodal sensing from RGB, LiDAR, and IMU streams to enable motion capture with high temporal resolution. We introduce FlashMotion, the first millisecond-scale multimodal human motion dataset, and present ResPose, a residual learning-based pose estimation algorithm. Experimental results demonstrate that ResPose reduces pose estimation error by approximately 40%, validating the effectiveness of both the FlashCap system and the FlashMotion dataset for high-frame-rate, high-accuracy pose estimation.
Abstract
Precise motion timing (PMT) is crucial for swift motion analysis; a millisecond difference may determine victory or defeat in sports competitions. Despite substantial progress in human pose estimation (HPE), PMT remains largely overlooked by the HPE community due to the limited availability of high-temporal-resolution labeled datasets. Today, PMT is achieved with high-speed RGB cameras in specialized scenarios such as the Olympic Games; however, their high cost, sensitivity to lighting conditions, bandwidth demands, and computational complexity limit their feasibility for daily use. We developed FlashCap, the first flashing-LED-based MoCap system for PMT. With FlashCap, we collect a millisecond-resolution human motion dataset, FlashMotion, comprising event, RGB, LiDAR, and IMU modalities, and demonstrate its high quality through rigorous validation. To evaluate the merits of FlashMotion, we perform two tasks: precise motion timing and high-temporal-resolution HPE. For these tasks, we propose ResPose, a simple yet effective baseline that learns residual poses from event and RGB streams. Experimental results show that ResPose reduces pose estimation errors by ~40% and achieves millisecond-level timing accuracy, enabling new research opportunities. The dataset and code will be shared with the community.
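The residual-pose idea described above can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: it assumes a base pose is predicted from the (lower-rate) RGB stream and a lightweight event branch predicts a small per-joint correction at millisecond rate. All function names, feature dimensions, and the joint count are hypothetical.

```python
import numpy as np

NUM_JOINTS = 24  # assumed joint count (e.g. an SMPL-style skeleton)

def base_pose_from_rgb(rgb_feat: np.ndarray) -> np.ndarray:
    """Stand-in for an RGB pose network: maps a feature vector to 3D joints."""
    rng = np.random.default_rng(0)  # fixed weights for a deterministic toy
    W = rng.standard_normal((rgb_feat.shape[-1], NUM_JOINTS * 3)) * 0.01
    return (rgb_feat @ W).reshape(NUM_JOINTS, 3)

def residual_from_events(event_feat: np.ndarray) -> np.ndarray:
    """Stand-in for the event branch: predicts a small per-joint residual."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((event_feat.shape[-1], NUM_JOINTS * 3)) * 0.001
    return (event_feat @ W).reshape(NUM_JOINTS, 3)

def respose_step(rgb_feat: np.ndarray, event_feat: np.ndarray) -> np.ndarray:
    """Refined pose = base pose (RGB) + learned residual (events)."""
    return base_pose_from_rgb(rgb_feat) + residual_from_events(event_feat)

pose = respose_step(np.ones(128), np.ones(64))
print(pose.shape)  # (24, 3): one 3D position per joint
```

The key design choice sketched here is that the event branch only has to model a small correction on top of the RGB-based estimate, which is what makes the residual formulation attractive for high-frame-rate refinement.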