Nonlinear action prediction models reveal multi-timescale locomotor control

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Background: Existing gait control models predominantly rely on linear, single-timescale assumptions and are extensively validated in controlled laboratory settings, yet exhibit limited generalizability to real-world, complex environments.

Method: We propose a nonlinear, multi-timescale foot-placement prediction model that jointly encodes biomechanical states and eye-movement signals. The architecture integrates GRUs and Transformers to capture flexible temporal dependencies across heterogeneous input modalities.

Contribution/Results: Our model is the first to empirically reveal that gait control exhibits context-dependent (e.g., terrain complexity) and modality-dependent (e.g., walking vs. running; treadmill vs. overground) temporal structure: complex terrains engage faster-timescale predictions, and visual information precedes body-state cues in predictive control. Experiments demonstrate significant performance gains over conventional linear models across diverse locomotor contexts and sensor modalities, strong cross-context generalization, and, critically, the first quantitative characterization of the central nervous system's temporal hierarchy in locomotor control.

📝 Abstract
Modeling movement in real-world tasks is a fundamental scientific goal. However, it is unclear whether existing models and their assumptions, overwhelmingly tested in laboratory-constrained settings, generalize to the real world. For example, data-driven models of foot placement control -- a crucial action for stable locomotion -- assume linear and single timescale mappings. We develop nonlinear foot placement prediction models, finding that neural network architectures with flexible input history-dependence like GRU and Transformer perform best across multiple contexts (walking and running, treadmill and overground, varying terrains) and input modalities (multiple body states, gaze), outperforming traditional models. These models reveal context- and modality-dependent timescales: there is more reliance on fast-timescale predictions in complex terrain, gaze predictions precede body state predictions, and full-body state predictions precede center-of-mass-relevant predictions. Thus, nonlinear action prediction models provide quantifiable insights into real-world motor control and can be extended to other actions, contexts, and populations.
Problem

Research questions and friction points this paper is trying to address.

Do existing foot-placement models, which assume linear, single-timescale mappings and are validated mostly in laboratory settings, generalize to the real world?
Can nonlinear models predict foot placement across real-world contexts such as walking, running, treadmill, overground, and varied terrains?
Can such models reveal context- and modality-dependent timescales of locomotor control?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nonlinear prediction models outperform traditional linear foot-placement models.
GRU and Transformer architectures with flexible input history-dependence generalize across diverse contexts and input modalities.
Learned timescales expose context- and modality-dependent structure: faster-timescale predictions dominate in complex terrain, and gaze predictions precede body-state predictions.
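To make the modeling idea concrete, here is a minimal sketch of the kind of recurrent predictor the paper describes: a GRU whose hidden state carries a flexible, learned memory of the input history (body states plus gaze), read out as a foot-placement prediction. This is an illustrative toy in numpy, not the authors' implementation; all dimensions, variable names, and the random weights are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell in numpy. Unlike a fixed-lag linear regression,
    the gated hidden state lets the model weight input history over
    flexible, data-dependent timescales."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.1
        self.Wz = rng.normal(0, scale, (n_hidden, n_in + n_hidden))  # update gate
        self.Wr = rng.normal(0, scale, (n_hidden, n_in + n_hidden))  # reset gate
        self.Wh = rng.normal(0, scale, (n_hidden, n_in + n_hidden))  # candidate state

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)  # how much of the new candidate to admit
        r = sigmoid(self.Wr @ xh)  # how much past state the candidate reads
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_cand

def predict_foot_placement(body_states, gaze, cell, W_out):
    """Run the GRU over time-aligned body-state (T, n_body) and gaze
    (T, n_gaze) streams; read out a 2-D foot placement from the final state."""
    h = np.zeros(W_out.shape[1])
    for xb, xg in zip(body_states, gaze):
        h = cell.step(np.concatenate([xb, xg]), h)
    return W_out @ h

# Toy dimensions (hypothetical): 6 body-state channels, 2 gaze channels.
T, n_body, n_gaze, n_hidden = 50, 6, 2, 16
rng = np.random.default_rng(1)
cell = GRUCell(n_body + n_gaze, n_hidden)
W_out = rng.normal(0, 0.1, (2, n_hidden))
pred = predict_foot_placement(rng.normal(size=(T, n_body)),
                              rng.normal(size=(T, n_gaze)), cell, W_out)
print(pred.shape)  # (2,): predicted (x, y) foot placement
```

In the paper's framing, the interesting quantity is not the prediction itself but how far back in time inputs influence it; probing a trained network's sensitivity to input history at different lags is what reveals the context- and modality-dependent timescales.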
Authors
Wei-Chen Wang (Co-Founder, Eigen AI)
Antoine De Comite (Massachusetts Institute of Technology)
Monica Daley (University of California, Irvine)
Alexandra Voloshina (University of California, Irvine)
Nidhi Seethapathi (Massachusetts Institute of Technology)