Your Turn: At Home Turning Angle Estimation for Parkinson's Disease Severity Assessment

📅 2024-08-15
🏛️ arXiv.org
🤖 AI Summary
Existing clinical scoring tools fail to capture hour-scale dynamic variations in turning angle among Parkinson's disease (PD) patients, as they rely on brief, controlled clinical assessments. Method: We propose the first unobtrusive, monocular video-based turning-angle estimation algorithm designed for real-world home environments, overcoming spatiotemporal limitations of clinical evaluation. Our approach integrates FastPose and a Strided Transformer to extract robust 3D skeletal poses; models turning angle via hip–knee joint rotation kinematics; and introduces a 3D pose-driven quantization framework resilient to confounders such as occluding clothing and low illumination. To address the absence of ground truth in home settings, we adopt an expert-defined discrete labeling scheme with 45° granularity. Results: On the Turn-REMAP dataset, our method achieves 41.6% classification accuracy, a 34.7° mean absolute error, and 68.3% weighted precision (WPrec), demonstrating the feasibility of fine-grained, at-home monitoring of PD progression.

📝 Abstract
People with Parkinson's Disease (PD) often experience progressively worsening gait, including changes in how they turn around, as the disease progresses. Existing clinical rating tools are not capable of capturing hour-by-hour variations of PD symptoms, as they are confined to brief assessments within clinic settings. Measuring gait turning angles continuously and passively is a component step towards using gait characteristics as sensitive indicators of disease progression in PD. This paper presents a deep learning-based approach to automatically quantify turning angles by extracting 3D skeletons from videos and calculating the rotation of hip and knee joints. We utilise state-of-the-art human pose estimation models, FastPose and Strided Transformer, on a total of 1386 turning video clips from 24 subjects (12 people with PD and 12 healthy control volunteers), trimmed from a PD dataset of unscripted free-living videos in a home-like setting (Turn-REMAP). We also curate a turning video dataset, Turn-H3.6M, from the public Human3.6M human pose benchmark with 3D ground truth, to further validate our method. Previous gait research has primarily taken place in clinics or laboratories evaluating scripted gait outcomes, but this work focuses on free-living home settings where complexities exist, such as baggy clothing and poor lighting. Due to difficulties in obtaining accurate ground truth data in a free-living setting, we quantise the angle into the nearest 45° bin based on the manual labelling of expert clinicians. Our method achieves a turning calculation accuracy of 41.6%, a Mean Absolute Error (MAE) of 34.7°, and a weighted precision (WPrec) of 68.3% for Turn-REMAP. This is the first work to explore the use of single monocular camera data to quantify turns by PD patients in a home setting.
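The core idea of computing a turn from hip-joint rotation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keypoint names, the dict layout, and the y-up ground-plane convention are all assumptions.

```python
import math

def heading_angle(left_hip, right_hip):
    """Heading direction in degrees from the left-to-right hip vector,
    assuming y is up and x-z is the ground plane (conventions vary
    between skeleton formats)."""
    vx = right_hip[0] - left_hip[0]
    vz = right_hip[2] - left_hip[2]
    # The facing direction is the hip line rotated 90 degrees in the
    # ground plane; atan2 keeps the result quadrant-aware.
    return math.degrees(math.atan2(vx, -vz))

def turning_angle(poses):
    """Sum wrapped frame-to-frame heading changes over a turn clip.
    `poses` is a list of dicts with 'left_hip'/'right_hip' 3D points
    (a hypothetical layout for this sketch)."""
    total, prev = 0.0, None
    for p in poses:
        h = heading_angle(p['left_hip'], p['right_hip'])
        if prev is not None:
            # Wrap the per-frame change into [-180, 180) so a crossing
            # of the +/-180 degree boundary does not produce a spike.
            total += (h - prev + 180.0) % 360.0 - 180.0
        prev = h
    return total
```

Accumulating small wrapped deltas frame by frame, rather than comparing only the first and last frames, lets the estimate distinguish, say, a 270° turn from a 90° turn in the opposite direction.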
Problem

Research questions and friction points this paper is trying to address.

Estimating turning angles at home for Parkinson's Disease assessment
Overcoming clinic limitations with continuous passive gait monitoring
Validating deep learning on free-living videos for accurate turn quantification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep learning estimates 3D skeletons from videos
Utilizes FastPose and Strided Transformer models
Quantifies turning angles in home settings
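The 45° expert-labelling scheme amounts to snapping each estimated angle to the nearest bin before scoring against clinician labels; a minimal sketch (function names are illustrative, not from the paper):

```python
def quantise_turn(angle_deg, bin_size=45.0):
    """Snap a turn magnitude to the nearest multiple of the bin size,
    mirroring the 45-degree expert-labelled bins described above."""
    return bin_size * round(angle_deg / bin_size)

def mean_absolute_error(predicted, labelled):
    """MAE in degrees between predicted angles and quantised labels."""
    return sum(abs(p, ) if False else abs(p - q)
               for p, q in zip(predicted, labelled)) / len(labelled)
```

For example, an estimated 100° turn falls into the 90° bin, and a 157° turn into the 135° bin; the reported 34.7° MAE is then the average gap between raw estimates and these binned labels.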
Qiushuo Cheng
Faculty of Engineering, University of Bristol, UK
Catherine Morgan
Translational Health Sciences, University of Bristol, UK; North Bristol NHS Trust, Southmead Hospital, Bristol, UK
Arindam Sikdar
Faculty of Engineering, University of Bristol, UK
A. Masullo
Faculty of Engineering, University of Bristol, UK
Alan Whone
Unknown affiliation
Majid Mirmehdi
Professor of Computer Vision, FIAPR, FBMVA, University of Bristol
Computer Vision and Pattern Recognition