🤖 AI Summary
This work proposes a markerless, video-driven gait analysis framework that addresses the limited biomechanical interpretability of conventional keypoint-based methods and their inability to accurately estimate joint kinematics. The approach first reconstructs a 3D human body model from monocular video, then extracts biomechanically meaningful landmarks aligned with those used in motion capture systems, and integrates them into the OpenSim platform for dynamic simulation. To the best of our knowledge, this is the first method to enable markerless extraction of biomechanically interpretable landmarks directly from video for use in OpenSim. The resulting kinematic estimates demonstrate high agreement with the marker-based gold standard in both spatiotemporal and joint-level parameters, significantly outperforming existing pose-estimation-only approaches and thereby enhancing the clinical applicability and accuracy of gait analysis.
📝 Abstract
This paper presents a biomechanically interpretable framework for gait analysis using 3D human reconstruction from video data. Unlike conventional keypoint-based approaches, the proposed method extracts biomechanically meaningful markers analogous to those used in motion capture systems and integrates them within OpenSim for joint kinematic estimation. To evaluate performance, both spatiotemporal and kinematic gait parameters were analysed against reference marker-based data. Results indicate strong agreement with marker-based measurements and considerable improvement over pose-estimation-only methods. The proposed framework offers a scalable, markerless, and interpretable approach to accurate gait assessment, supporting broader clinical and real-world deployment of vision-based biomechanics.
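The abstract describes feeding video-derived, motion-capture-style landmarks into OpenSim for kinematic estimation. A common way to bridge the two in practice is to write the estimated 3D landmark trajectories to OpenSim's `.trc` marker file format, which its Inverse Kinematics tool consumes. The sketch below illustrates that export step under stated assumptions: the marker names, frame rate, and trajectory values are hypothetical placeholders, not the paper's actual outputs or code.

```python
# Sketch: export hypothetical video-derived 3D landmarks to OpenSim's .trc
# marker format (the input to the Inverse Kinematics tool). Marker names and
# trajectories below are illustrative placeholders, not the paper's data.

def write_trc(path, marker_names, frames, rate=30.0, units="m"):
    """frames: list of per-frame lists of (x, y, z) tuples, one per marker."""
    n_frames, n_markers = len(frames), len(marker_names)
    with open(path, "w") as f:
        # Standard TRC header block: file type, then column metadata.
        f.write(f"PathFileType\t4\t(X/Y/Z)\t{path}\n")
        f.write("DataRate\tCameraRate\tNumFrames\tNumMarkers\tUnits\t"
                "OrigDataRate\tOrigDataStartFrame\tOrigNumFrames\n")
        f.write(f"{rate:.1f}\t{rate:.1f}\t{n_frames}\t{n_markers}\t{units}\t"
                f"{rate:.1f}\t1\t{n_frames}\n")
        # Each marker name spans three columns (X, Y, Z).
        f.write("Frame#\tTime\t" + "\t\t\t".join(marker_names) + "\n")
        f.write("\t\t" + "\t".join(
            f"X{i}\tY{i}\tZ{i}" for i in range(1, n_markers + 1)) + "\n\n")
        # One row per frame: frame index, timestamp, then flattened coordinates.
        for i, frame in enumerate(frames):
            coords = "\t".join(f"{c:.5f}" for xyz in frame for c in xyz)
            f.write(f"{i + 1}\t{i / rate:.5f}\t{coords}\n")

# Hypothetical anatomical landmarks (pelvis and right-knee markers), two frames.
names = ["RASI", "LASI", "RKNE"]
frames = [[(0.10, 0.90, 0.00), (-0.10, 0.90, 0.00), (0.12, 0.50, 0.02)],
          [(0.11, 0.90, 0.00), (-0.09, 0.90, 0.00), (0.13, 0.50, 0.02)]]
write_trc("landmarks.trc", names, frames)
```

A file produced this way could then be passed to OpenSim's Inverse Kinematics tool alongside a scaled musculoskeletal model; the specific landmark set a real pipeline would export depends on the marker protocol the model expects.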