🤖 AI Summary
This study aims to infer biologically plausible neural control strategies from animal kinematic trajectories, enabling interpretable modeling of how nervous systems generate natural behaviors.
Method: We propose an end-to-end differentiable neural control learning framework that integrates deep reinforcement learning, differentiable physics simulation, and a custom neural controller, optimized solely from raw kinematic data, to drive high-fidelity biomechanical models.
Contribution/Results: The method achieves cross-species generalizability, high data efficiency, and computational speed. It successfully reconstructs neurobiologically constrained control policies for diverse locomotor behaviors (e.g., walking, jumping), outperforming existing approaches in accuracy, interpretability, and potential for experimental validation. By bridging kinematics, neural dynamics, and biomechanics, our framework establishes a novel paradigm for dissecting motor neural mechanisms and enabling closed-loop behavioral simulation.
📝 Abstract
The primary output of the nervous system is movement and behavior. While recent advances have democratized pose tracking during complex behavior, kinematic trajectories alone provide only indirect access to the underlying control processes. Here we present MIMIC-MJX, a framework for learning biologically plausible neural control policies from kinematics. MIMIC-MJX models the generative process of motor control by training neural controllers that learn to actuate biomechanically realistic body models in physics simulation to reproduce real kinematic trajectories. We demonstrate that our implementation is accurate, fast, data-efficient, and generalizable to diverse animal body models. Policies trained with MIMIC-MJX can be used both to analyze neural control strategies and to simulate behavioral experiments, illustrating its potential as an integrative modeling framework for neuroscience.
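The core idea, fitting a control policy so that a simulated body reproduces an observed kinematic trajectory, can be sketched in a toy form. The snippet below is illustrative only and is not the MIMIC-MJX implementation: a PD controller with two gains stands in for the neural policy, a 1-D point mass stands in for the biomechanical model and the MJX physics engine, and finite-difference gradient descent with backtracking stands in for the actual training procedure. All names and numbers are hypothetical.

```python
import numpy as np

DT, T = 0.1, 20  # integration timestep and horizon (illustrative values)

def rollout(params, x0, ref):
    """Simulate a 1-D point mass driven by a PD 'policy'.

    params = (kp, kd): feedback gains, a stand-in for neural controller weights.
    ref: (T+1, 2) reference trajectory of (position, velocity) -- the
    observed kinematics the policy must reproduce.
    """
    kp, kd = params
    pos, vel = x0
    traj = [(pos, vel)]
    for t in range(T):
        u = kp * (ref[t, 0] - pos) + kd * (ref[t, 1] - vel)  # control force
        vel += DT * u          # semi-implicit Euler integration
        pos += DT * vel
        traj.append((pos, vel))
    return np.array(traj)

def loss(params, x0, ref):
    # mean squared position-tracking error: the imitation objective
    return float(np.mean((rollout(params, x0, ref)[:, 0] - ref[:, 0]) ** 2))

# reference kinematics: move from x=0 to x=1 at constant velocity 0.5
ref = np.stack([np.linspace(0.0, 1.0, T + 1), np.full(T + 1, 0.5)], axis=1)
x0 = (0.0, 0.0)
params = np.zeros(2)

init = loss(params, x0, ref)
for _ in range(200):
    # finite-difference gradient of the tracking loss w.r.t. the gains
    g = np.array([
        (loss(params + e, x0, ref) - loss(params, x0, ref)) / 1e-4
        for e in (np.array([1e-4, 0.0]), np.array([0.0, 1e-4]))
    ])
    step = 1.0
    while step > 1e-8 and loss(params - step * g, x0, ref) >= loss(params, x0, ref):
        step *= 0.5  # backtrack until the step actually reduces the loss
    params = params - step * g
final = loss(params, x0, ref)
print(init, final)  # the tracking loss should drop substantially
```

In the actual framework, the linear gains would be replaced by a neural network, the point mass by a whole-body biomechanical model, and the hand-rolled optimizer by gradients through a differentiable simulator; the structure of the objective, matching simulated to observed kinematics, is the same.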