🤖 AI Summary
This work addresses the challenge of high-fidelity modeling and optimization of multi-agent animal behavior under unknown real-world biomechanical dynamics. We propose a deep reinforcement learning framework integrating data-driven simulation with counterfactual reasoning. Methodologically, we explicitly encode motion variables from incomplete dynamical models as RL action space components and introduce a trajectory-distance-based pseudo-reward mechanism, enabling stable training without prior dynamical knowledge. Further, we unify offline/online RL, imitation learning, state alignment, and counterfactual inference to support cross-species behavioral replication (fruit fly, salamander, silkworm moth) and counterfactual trajectory generation. Experiments demonstrate significant improvements: 32.7% reduction in trajectory reconstruction error and 2.1× acceleration in reward convergence. The framework establishes a novel, interpretable, and intervention-capable simulation paradigm for computational neuroethology and embodied AI.
📝 Abstract
Simulators of animal movements play a valuable role in studying behavior. Advances in imitation learning for robotics have expanded possibilities for reproducing human and animal movements. A key challenge for realistic multi-animal simulation in biology is bridging the gap between unknown real-world transition models and their simulated counterparts. Because locomotion dynamics are seldom known, relying solely on mathematical models is insufficient; constructing a simulator that both reproduces real trajectories and supports reward-driven optimization remains an open problem. We introduce a data-driven simulator for multi-animal behavior based on deep reinforcement learning and counterfactual simulation. We address the ill-posed nature of the problem caused by high degrees of freedom in locomotion by estimating movement variables of an incomplete transition model as actions within an RL framework. We also employ a distance-based pseudo-reward to align and compare states between cyber and physical spaces. Validated on artificial agents, flies, newts, and silkmoth, our approach achieves higher reproducibility of species-specific behaviors and improved reward acquisition compared with standard imitation and RL methods. Moreover, it enables counterfactual behavior prediction in novel experimental settings and supports multi-individual modeling for flexible what-if trajectory generation, suggesting its potential to simulate and elucidate complex multi-animal behaviors.