🤖 AI Summary
This work addresses a gap in multi-agent time-series modeling: existing models of spatiotemporal patterns across individuals do not incorporate explicit system-level collective behavior that can shape the trajectories of individual entities. We propose a hierarchical switching recurrent state model built from two coupled levels of discrete latent chains, in which a system-level Markov chain exerts structure-driven top-down influence on entity-level chains that govern each observed series, while recurrent feedback from recent observations provides context-aware bottom-up influence at both levels. Learning proceeds via closed-form variational coordinate ascent updates whose cost scales linearly in the number of entities, keeping unsupervised training tractable. Empirically, this lean parametric model achieves forecasts competitive with larger neural network models on basketball team movement data while requiring far fewer computational resources, and experiments on soldier data and a synthetic 64-agent cooperation task yield interpretable insights into how team dynamics unfold over time.
📝 Abstract
We seek a computationally efficient model for a collection of time series arising from multiple interacting entities (a.k.a. "agents"). Recent models of spatiotemporal patterns across individuals fail to incorporate explicit system-level collective behavior that can influence the trajectories of individual entities. To address this gap in the literature, we present a new hierarchical switching-state model that can be trained in an unsupervised fashion to simultaneously learn both system-level and individual-level dynamics. We employ a latent system-level discrete state Markov chain that provides top-down influence on latent entity-level chains which in turn govern the emission of each observed time series. Recurrent feedback from the observations to the latent chains at both entity and system levels allows recent situational context to inform how dynamics unfold at all levels in bottom-up fashion. We hypothesize that including both top-down and bottom-up influences on group dynamics will improve interpretability of the learned dynamics and reduce error when forecasting. Our hierarchical switching recurrent dynamical model can be learned via closed-form variational coordinate ascent updates to all latent chains that scale linearly in the number of entities. This is asymptotically no more costly than fitting a separate model for each entity. Analysis of both synthetic data and real basketball team movements suggests our lean parametric model can achieve competitive forecasts compared to larger neural network models that require far more computational resources. Further experiments on soldier data as well as a synthetic task with 64 cooperating entities show how our approach can yield interpretable insights about team dynamics over time.
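The abstract gives no equations, so the following is only one plausible reading of the architecture it describes, with all notation assumed: $s_t$ is the system-level discrete state, $z_{j,t}$ the discrete state of entity $j$, and $x_{j,t}$ its observation at time $t$.

$$
s_t \mid s_{t-1},\, x_{1:J,\,t-1} \;\sim\; \mathrm{Cat}\!\big(\pi^{\mathrm{sys}}(s_{t-1}, x_{1:J,\,t-1})\big), \qquad
z_{j,t} \mid z_{j,t-1},\, s_t,\, x_{j,t-1} \;\sim\; \mathrm{Cat}\!\big(\pi^{\mathrm{ent}}(z_{j,t-1}, s_t, x_{j,t-1})\big), \qquad
x_{j,t} \;\sim\; p\big(x_{j,t} \mid z_{j,t}\big),
$$

where the dependence of $z_{j,t}$ on $s_t$ is the top-down path and the dependence of both chains on recent observations is the bottom-up (recurrent) path.

The toy script below is not the paper's implementation: it drops the recurrent feedback terms and uses made-up Gaussian emissions and transition matrices. It only illustrates the two coupled levels of discrete chains and why a structured mean-field coordinate-ascent sweep costs time linear in the number of entities: each entity chain is updated independently given the current system-level beliefs, and the system chain is then updated from the pooled entity expectations.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

J, T = 4, 60           # entities, time steps
K_SYS, K_ENT = 2, 3    # system-level / entity-level discrete state counts

# --- Toy parameters (illustrative, not the paper's) ---
A_sys = np.array([[0.95, 0.05],
                  [0.05, 0.95]])                   # system-level transition matrix
pi_sys = np.array([0.5, 0.5])
# One entity-level transition matrix per system state: the top-down coupling.
A_ent = np.stack([0.9 * np.eye(K_ENT) + 0.05,      # "sticky" regime
                  np.full((K_ENT, K_ENT), 1.0)])   # "mixing" regime
A_ent /= A_ent.sum(-1, keepdims=True)
pi_ent = np.full(K_ENT, 1.0 / K_ENT)
mu = np.array([-2.0, 0.0, 2.0])                    # Gaussian emission means, unit variance

# --- Sample data from the toy generative process ---
s = np.zeros(T, dtype=int)                         # system-level states
z = np.zeros((J, T), dtype=int)                    # entity-level states
x = np.zeros((J, T))                               # observations
s[0] = rng.choice(K_SYS, p=pi_sys)
z[:, 0] = rng.choice(K_ENT, size=J, p=pi_ent)
x[:, 0] = rng.normal(mu[z[:, 0]])
for t in range(1, T):
    s[t] = rng.choice(K_SYS, p=A_sys[s[t - 1]])
    for j in range(J):
        z[j, t] = rng.choice(K_ENT, p=A_ent[s[t], z[j, t - 1]])
        x[j, t] = rng.normal(mu[z[j, t]])


def chain_marginals(log_pot, log_trans, log_init):
    """Sum-product on a chain with unary log potentials (T, K) and pairwise
    log potentials (T-1, K, K); returns singleton and pairwise marginals."""
    n, k = log_pot.shape
    la, lb = np.zeros((n, k)), np.zeros((n, k))
    la[0] = log_init + log_pot[0]
    for t in range(1, n):
        la[t] = log_pot[t] + logsumexp(la[t - 1][:, None] + log_trans[t - 1], axis=0)
    for t in range(n - 2, -1, -1):
        lb[t] = logsumexp(log_trans[t] + (log_pot[t + 1] + lb[t + 1])[None, :], axis=1)
    gamma = la + lb
    gamma -= logsumexp(gamma, axis=1, keepdims=True)
    xi = la[:-1, :, None] + log_trans + (log_pot[1:] + lb[1:])[:, None, :]
    xi -= logsumexp(xi, axis=(1, 2), keepdims=True)
    return np.exp(gamma), np.exp(xi)


# --- Mean-field coordinate ascent over q(s) * prod_j q(z_j) ---
log_lik = -0.5 * (x[:, :, None] - mu) ** 2         # entity emission log-likelihoods (J, T, K_ENT)
log_A_ent = np.log(A_ent)
q_s = np.full((T, K_SYS), 1.0 / K_SYS)             # beliefs over the system-level chain

for _ in range(25):
    # Entity updates (top-down): each chain uses E_q(s)[log A_ent[s_t]] as its transition potential.
    exp_log_trans = np.einsum('tk,kab->tab', q_s[1:], log_A_ent)
    xis = [chain_marginals(log_lik[j], exp_log_trans, np.log(pi_ent))[1] for j in range(J)]
    # System update (bottom-up): expected entity transitions act as per-step evidence for s_t.
    log_pot_sys = np.zeros((T, K_SYS))
    for xi in xis:
        log_pot_sys[1:] += np.einsum('tab,kab->tk', xi, log_A_ent)
    q_s, _ = chain_marginals(log_pot_sys, np.tile(np.log(A_sys), (T - 1, 1, 1)), np.log(pi_sys))

print("inferred system regime:", q_s.argmax(axis=1))
print("true system regime:    ", s)
```

Each sweep touches every entity once for its chain update and once to pool its expectations, so doubling the number of entities roughly doubles the per-iteration cost, which is the linear scaling the abstract refers to (here shown only in this simplified setting).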