BiomechGPT: Towards a Biomechanically Fluent Multimodal Foundation Model for Clinically Relevant Motion Tasks

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Clinical movement analysis lacks high-quality markerless biomechanical datasets and general-purpose models tailored to rehabilitation medicine. Method: The authors introduce BiomechGPT, a multimodal biomechanics-language model for rehabilitation, by (1) collecting a 30+ hour biomechanical trajectory dataset from nearly 500 participants, many with movement impairments of varying etiologies, and tokenizing the movement trajectories; (2) fine-tuning a language model to accept tokenized movement as an additional modality, aligning biomechanical trajectories with clinical queries; and (3) constructing a clinical movement question-answering dataset for instruction tuning. Contribution/Results: BiomechGPT outperforms unimodal baselines across five clinically relevant tasks—activity recognition, movement-impairment detection, diagnosis, clinical outcome scoring, and gait quantification—marking an important step towards a unified foundation model for rehabilitation movement analysis.

📝 Abstract
Advances in markerless motion capture are expanding access to biomechanical movement analysis, making it feasible to obtain high-quality movement data from outpatient clinics, inpatient hospitals, therapy, and even home. Expanding access to movement data in these diverse contexts makes the challenge of performing downstream analytics all the more acute. Creating separate bespoke analysis code for all the tasks end users might want is both intractable and does not take advantage of the common features of human movement underlying them all. Recent studies have shown that fine-tuning language models to accept tokenized movement as an additional modality enables successful descriptive captioning of movement. Here, we explore whether such a multimodal motion-language model can answer detailed, clinically meaningful questions about movement. We collected over 30 hours of biomechanics from nearly 500 participants, many with movement impairments from a variety of etiologies, performing a range of movements used in clinical outcomes assessments. After tokenizing these movement trajectories, we created a multimodal dataset of motion-related questions and answers spanning a range of tasks. We developed BiomechGPT, a multimodal biomechanics-language model, on this dataset. Our results show that BiomechGPT demonstrates high performance across a range of tasks such as activity recognition, identifying movement impairments, diagnosis, scoring clinical outcomes, and measuring walking. BiomechGPT provides an important step towards a foundation model for rehabilitation movement data.
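The abstract describes tokenizing movement trajectories so a language model can accept motion as an additional modality. The paper's actual tokenizer is not specified here, so the following is only a minimal sketch of the general vector-quantization idea: each pose frame is snapped to its nearest entry in a learned codebook, and the resulting discrete ids are rendered as special tokens that could be interleaved with text. The codebook, token names, and dimensions below are all illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "codebook" of K centroid poses. In a real system this would be
# learned (e.g. by a VQ-VAE encoder); here it is random for illustration.
K, D = 8, 6                      # 8 motion tokens, 6-dim pose features
codebook = rng.normal(size=(K, D))

def tokenize(trajectory):
    """Map each pose frame (row) to the index of its nearest codebook entry."""
    # trajectory: (T, D) array of pose features per frame
    dists = np.linalg.norm(trajectory[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)  # (T,) discrete motion-token ids

def to_motion_tokens(ids):
    """Render ids as special tokens a language-model vocabulary could absorb."""
    return ["<motion_%d>" % i for i in ids]

trajectory = rng.normal(size=(5, D))  # 5 frames of toy motion data
ids = tokenize(trajectory)
prompt = "Q: What activity is shown? " + " ".join(to_motion_tokens(ids))
```

Once motion is expressed as discrete tokens like `<motion_3>`, clinical questions and answers can be posed as ordinary text sequences, which is what makes instruction tuning on motion question-answer pairs possible.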
Problem

Research questions and friction points this paper is trying to address.

Developing a multimodal model for clinical movement analysis
Enabling detailed biomechanical question answering
Creating a foundation model for rehabilitation data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Markerless motion capture for biomechanical analysis
Multimodal motion-language model for clinical questions
Tokenized movement trajectories for diverse clinical tasks
Ruize Yang
Department of Neuroscience, Northwestern University, Shirley Ryan AbilityLab
Ann Kennedy
Associate Professor, The Scripps Research Institute
Neuroscience
R. J. Cotton
Department of PM&R, Northwestern University, Shirley Ryan AbilityLab