OpenMoCap: Rethinking Optical Motion Capture under Real-world Occlusion

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe performance degradation that large-scale marker occlusion causes in optical motion capture, this paper proposes the first robust motion capture framework designed specifically for highly occluded scenarios. First, the authors introduce CMU-Occlu, a photorealistic occlusion dataset generated via ray tracing that accurately models real-world occlusion patterns. Second, they design an end-to-end differentiable marker-to-joint chain inference mechanism that jointly optimizes marker observations and kinematic joint constraints, explicitly capturing long-range dependencies. Third, the method achieves significant improvements over state-of-the-art approaches under multi-occluder conditions and has been integrated into the industrial-grade MoSen system for real-world deployment. Both the source code and the CMU-Occlu dataset are publicly released, establishing a new benchmark for robust motion capture research.

📝 Abstract
Optical motion capture is a foundational technology driving advancements in cutting-edge fields such as virtual reality and film production. However, system performance suffers severely under large-scale marker occlusions common in real-world applications. An in-depth analysis identifies two primary limitations of current models: (i) the lack of training datasets accurately reflecting realistic marker occlusion patterns, and (ii) the absence of training strategies designed to capture long-range dependencies among markers. To tackle these challenges, we introduce the CMU-Occlu dataset, which incorporates ray tracing techniques to realistically simulate practical marker occlusion patterns. Furthermore, we propose OpenMoCap, a novel motion-solving model designed specifically for robust motion capture in environments with significant occlusions. Leveraging a marker-joint chain inference mechanism, OpenMoCap enables simultaneous optimization and construction of deep constraints between markers and joints. Extensive comparative experiments demonstrate that OpenMoCap consistently outperforms competing methods across diverse scenarios, while the CMU-Occlu dataset opens the door for future studies in robust motion solving. The proposed OpenMoCap is integrated into the MoSen MoCap system for practical deployment. The code is released at: https://github.com/qianchen214/OpenMoCap.
Problem

Research questions and friction points this paper is trying to address.

Addressing performance decline in optical motion capture due to occlusion
Lack of datasets reflecting real-world occlusion patterns in training
Absence of strategies for long-range marker dependency capture
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ray tracing simulates realistic marker occlusion patterns (CMU-Occlu dataset)
Marker-joint chain inference jointly optimizes deep marker-joint constraints
OpenMoCap outperforms competing methods in occluded scenarios
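The paper's marker-joint chain inference is an end-to-end learned mechanism whose details are in the full text; as rough intuition for why joint constraints help under occlusion, the toy sketch below estimates a joint position from only the visible markers rigidly attached to it. All names, offsets, and the averaging scheme are illustrative assumptions, not the paper's method:

```python
import numpy as np

def solve_joint(markers, offsets, visible):
    """Estimate a joint position from the visible attached markers.

    markers: (M, 3) observed marker positions (occluded rows are unreliable)
    offsets: (M, 3) known rigid offsets of each marker from the joint
    visible: (M,) boolean mask, True where the marker was observed
    """
    if not visible.any():
        raise ValueError("all markers occluded; a kinematic prior would be needed")
    # Each visible marker votes joint ~= marker - offset; average the votes.
    votes = markers[visible] - offsets[visible]
    return votes.mean(axis=0)

# Hypothetical wrist joint with three attached markers, one occluded.
offsets = np.array([[0.03, 0.0, 0.0], [-0.03, 0.0, 0.0], [0.0, 0.04, 0.0]])
true_joint = np.array([1.0, 1.2, 0.9])
markers = true_joint + offsets
visible = np.array([True, True, False])  # third marker occluded
print(solve_joint(markers, offsets, visible))
```

With noise-free observations the two visible markers already pin down the joint; the point of a learned chain mechanism, per the summary, is to propagate such constraints across the whole kinematic chain so that even heavily occluded joints stay constrained by distant markers.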
Chen Qian
Tsinghua University, Beijing, China
Danyang Li
Shuimu Scholar, Tsinghua University
Research areas: Embodied AI, Mobile Computing, Internet of Things, Edge Computing, SLAM Systems
Xinran Yu
Unknown affiliation
Zheng Yang
Tsinghua University, Beijing, China
Qiang Ma
Tsinghua University, Beijing, China