🤖 AI Summary
Clinical movement analysis lacks high-quality markerless biomechanical datasets and general-purpose models tailored to rehabilitation medicine. Method: The authors introduce BiomechGPT, a multimodal biomechanics-language model for rehabilitation, by (1) collecting over 30 hours of biomechanical trajectories from nearly 500 participants, many with movement impairments from a variety of etiologies, and tokenizing the trajectories; (2) constructing a multimodal dataset of clinically meaningful motion-related questions and answers spanning a range of tasks; and (3) fine-tuning a language model to accept tokenized movement as an additional modality. Contribution/Results: BiomechGPT demonstrates high performance across clinically relevant tasks, including activity recognition, identifying movement impairments, diagnosis, scoring clinical outcomes, and measuring walking, providing an important step towards a foundation model for rehabilitation movement data.
📝 Abstract
Advances in markerless motion capture are expanding access to biomechanical movement analysis, making it feasible to obtain high-quality movement data in outpatient clinics, inpatient hospitals, therapy settings, and even the home. Expanding access to movement data in these diverse contexts makes the challenge of performing downstream analytics all the more acute. Creating separate bespoke analysis code for every task end users might want is intractable and fails to take advantage of the common features of human movement underlying them all. Recent studies have shown that fine-tuning language models to accept tokenized movement as an additional modality enables successful descriptive captioning of movement. Here, we explore whether such a multimodal motion-language model can answer detailed, clinically meaningful questions about movement. We collected over 30 hours of biomechanics from nearly 500 participants, many with movement impairments from a variety of etiologies, performing a range of movements used in clinical outcomes assessments. After tokenizing these movement trajectories, we created a multimodal dataset of motion-related questions and answers spanning a range of tasks. We trained BiomechGPT, a multimodal biomechanics-language model, on this dataset. Our results show that BiomechGPT achieves high performance across a range of tasks, including activity recognition, identifying movement impairments, diagnosis, scoring clinical outcomes, and measuring walking. BiomechGPT provides an important step towards a foundation model for rehabilitation movement data.
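The abstract does not specify how trajectories are tokenized; a common approach for feeding continuous motion into a language model is vector quantization, where each pose frame is mapped to the index of its nearest vector in a learned codebook. The sketch below illustrates that idea only; the function name, the 2-D features, and the codebook values are all hypothetical, not taken from the paper.

```python
from math import dist

def tokenize_trajectory(frames, codebook):
    """Assign each pose frame the index of its nearest codebook vector.

    frames:   list of per-frame biomechanical feature tuples
    codebook: list of learned centroid tuples (hypothetical, e.g. from k-means)
    Returns one integer token per frame.
    """
    return [min(range(len(codebook)), key=lambda k: dist(f, codebook[k]))
            for f in frames]

# Toy example with 2-D "features" and a 3-entry codebook (values illustrative)
codebook = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
frames = [(0.1, -0.1), (0.9, 1.2), (1.8, 0.2), (0.0, 0.1)]
print(tokenize_trajectory(frames, codebook))  # [0, 1, 2, 0]
```

The resulting integer sequence can then be embedded alongside text tokens, which is what allows a fine-tuned language model to treat movement as an additional input modality.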