🤖 AI Summary
This study addresses the lack of reliable uncertainty quantification in existing markerless multi-view motion capture systems for clinical gait analysis, which hinders the assessment of prediction confidence in individual trials. The authors propose the first probabilistic, variational inference–based multi-view markerless motion capture model that estimates calibrated uncertainties by modeling the posterior distribution over joint angles. This approach identifies unreliable predictions without requiring concurrent ground-truth instrumentation. Experimental results demonstrate median errors of ~16 mm for step length and ~12 mm for stride length, with median bias-corrected lower-limb joint angle errors ranging from 1.5° to 3.8°. The expected calibration error (ECE) generally remains below 0.1, indicating strong alignment between predicted uncertainties and observed errors.
📝 Abstract
Video-based human movement analysis holds potential for movement assessment in clinical practice and research. However, clinical implementation of, and trust in, multi-view markerless motion capture (MMMC) require that, in addition to being accurate, these systems produce reliable confidence intervals indicating how accurate they are for any individual trial. Building on our prior work using variational inference to estimate joint angle posterior distributions, this study evaluates the calibration and reliability of a probabilistic MMMC method. We analyzed data from 68 participants across two institutions, validating the model against an instrumented walkway and standard marker-based motion capture. We measured the calibration of the confidence intervals using the Expected Calibration Error (ECE). The model demonstrated reliable calibration, yielding ECE values generally < 0.1 for both step and stride length and for bias-corrected gait kinematics. We observed median step and stride length errors of ~16 mm and ~12 mm, respectively, with median bias-corrected kinematic errors ranging from 1.5 to 3.8 degrees across lower-extremity joints. Consistent with the calibrated ECE, the magnitude of the model's predicted uncertainty correlated strongly with observed error measures. These findings indicate that, as designed, the probabilistic reconstruction quantifies epistemic uncertainty, allowing it to identify unreliable outputs without the need for concurrent ground-truth instrumentation.
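The abstract reports calibration via the Expected Calibration Error of predicted confidence intervals. The paper's exact binning and confidence-level grid are not given here, so the following is only a generic sketch of interval-based ECE for Gaussian predictive distributions (the function name, level grid, and Gaussian assumption are illustrative, not the authors' implementation): for each nominal level p, compare the empirical fraction of ground-truth values falling inside the predicted central p-interval with p itself, and average the absolute gaps.

```python
import numpy as np
from statistics import NormalDist  # stdlib normal quantiles, no SciPy needed


def expected_calibration_error(y_true, mu, sigma, levels=None):
    """Interval-based ECE for Gaussian predictive distributions N(mu, sigma).

    For each nominal confidence level p, the central p-interval is
    mu +/- z_p * sigma with z_p = Phi^{-1}(0.5 + p/2). ECE is the mean
    absolute gap between nominal and empirical interval coverage.
    """
    if levels is None:
        levels = np.linspace(0.05, 0.95, 19)  # illustrative grid of levels
    z = np.abs((np.asarray(y_true) - np.asarray(mu)) / np.asarray(sigma))
    nd = NormalDist()
    gaps = []
    for p in levels:
        half_width = nd.inv_cdf(0.5 + p / 2)       # interval half-width in z-units
        empirical = float(np.mean(z <= half_width))  # observed coverage at level p
        gaps.append(abs(empirical - p))
    return float(np.mean(gaps))


# Synthetic check: calibrated predictions score low; overconfident
# (too-narrow) intervals score high, crossing the paper's 0.1 threshold.
rng = np.random.default_rng(0)
mu = np.zeros(20000)
y = rng.normal(mu, 1.0)
ece_calibrated = expected_calibration_error(y, mu, 1.0)
ece_overconfident = expected_calibration_error(y, mu, 0.5)
```

Under this construction, an ECE below 0.1 means that, averaged over confidence levels, the predicted intervals' empirical coverage deviates from the nominal level by less than ten percentage points.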