🤖 AI Summary
Prechtl’s General Movement Assessment (GMA) for early neurodevelopmental disorder screening in neonates is hindered by its reliance on scarce expert raters and therefore scales poorly. To address this, we present the first end-to-end automated GM quality classification system validated in uncontrolled clinical settings, which are characterized by heterogeneous video acquisition devices, variable clip durations, and only coarse-grained labels. Our method introduces a robust video feature extraction framework that integrates I3D-based spatiotemporal modeling, self-supervised pretraining, and temporal pooling, coupled with a weakly supervised learning strategy tailored to sparse annotations. Evaluated on a multi-device, real-world infant video dataset, our approach achieves 89.2% accuracy, significantly outperforming existing baselines. This enables low-cost, high-throughput community-level screening and establishes a scalable, clinically deployable pathway for general movement assessment.
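As a rough illustration of the pipeline the summary describes, the minimal sketch below combines a pretrained spatiotemporal backbone, temporal pooling, and a linear classification head. This is not the authors' implementation: torchvision's `r3d_18` stands in for the I3D backbone, and the weights, clip size, class count, and names are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code): a pretrained video backbone
# produces clip features, which are temporally pooled and passed to a linear
# head for GM quality classification. r3d_18 is a stand-in for I3D.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

class GMQualityClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
        # Keep everything up to and including the global average pool,
        # drop the Kinetics classification layer.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(512, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 3, frames, height, width)
        feats = self.backbone(clips).flatten(1)  # (batch, 512) pooled features
        return self.head(feats)                  # (batch, num_classes) logits

model = GMQualityClassifier(num_classes=2)
dummy_clip = torch.randn(1, 3, 16, 112, 112)     # one 16-frame RGB clip
print(model(dummy_clip).shape)                   # torch.Size([1, 2])
```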
📝 Abstract
General movements (GMs) are spontaneous, coordinated body movements in infants that offer valuable insights into the developing nervous system. Assessed through the Prechtl GM Assessment (GMA), GMs are reliable predictors of neurodevelopmental disorders. However, GMA requires specifically trained clinicians, who are limited in number. To scale up newborn screening, there is a need for an algorithm that can automatically classify GMs from infant video recordings. These recordings pose challenges, including variability in length, device type, and setting, and each video is only coarsely annotated for overall movement quality. In this work, we introduce a tool for extracting features from these recordings and explore various machine learning techniques for automated GM classification.
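To make the coarse-label, variable-length setting concrete, the hedged sketch below shows one common way to handle it: a recording of arbitrary length is cut into fixed-size clips, each clip is scored by a clip-level model, and the clip scores are averaged into a single video-level prediction that can be trained against the one overall-quality label. The clip length, mean pooling, and toy clip model are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch: aggregate clip-level predictions into one video-level
# prediction for a recording of arbitrary length, matching the single coarse
# label per video. Clip length and mean pooling are illustrative choices.
import torch
import torch.nn as nn

def video_logits(frames: torch.Tensor, clip_model: nn.Module,
                 clip_len: int = 16) -> torch.Tensor:
    """frames: (3, total_frames, H, W) for one recording; returns (1, num_classes)."""
    c, t, h, w = frames.shape
    assert t >= clip_len, "recording shorter than one clip"
    n_clips = t // clip_len
    clips = frames[:, : n_clips * clip_len]                        # drop ragged tail
    clips = clips.reshape(c, n_clips, clip_len, h, w).permute(1, 0, 2, 3, 4)
    per_clip = clip_model(clips)                                   # (n_clips, num_classes)
    return per_clip.mean(dim=0, keepdim=True)                      # (1, num_classes)

# Tiny stand-in clip model so the sketch runs on its own; in practice this
# would be a spatiotemporal network such as the one sketched above.
toy_clip_model = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(3, 2))
recording = torch.randn(3, 300, 112, 112)              # variable-length recording
print(video_logits(recording, toy_clip_model).shape)   # torch.Size([1, 2])
```

Under this setup, the mean-pooled video-level logits can be trained against the coarse per-video label with a standard cross-entropy loss, which is one simple instance of weakly supervised learning from video-level annotations.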