🤖 AI Summary
This study addresses the challenge of modeling human motion priors in urban scenes to support crowd flow analysis and robot co-navigation. Conventional CNNs struggle to capture long-range spatial dependencies, limiting their ability to model cross-regional movement patterns. To overcome this, we propose an end-to-end spatial modeling framework for human motion based on Vision Transformers (ViTs). Our method jointly encodes trajectory-derived features and positional embeddings to represent inter-regional motion correlations explicitly via global self-attention. The architecture simultaneously predicts frequently traversed paths, velocity distributions, and stationary regions. Evaluated on a standard benchmark dataset, our approach outperforms a CNN-based baseline across key metrics, including path prediction accuracy, velocity estimation error, and stop-region localization, demonstrating the benefit of global attention for learning motion priors. This work supports the understanding of collective human behavior and embodied navigation in dynamic urban environments.
📝 Abstract
A clear understanding of where humans move in a scene, which paths and speeds they usually take, and where they stop is important for many applications, such as mobility studies in urban areas or robot navigation in human-populated environments. In this article, we propose a neural architecture based on Vision Transformers (ViTs) to provide this information. Such a solution can arguably capture spatial correlations more effectively than Convolutional Neural Networks (CNNs). We describe the methodology and the proposed neural architecture, and report experimental results on a standard dataset, showing that the proposed ViT architecture improves the metrics compared to a CNN-based method.
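The core mechanism described above, patch tokens plus positional embeddings processed by global self-attention and then mapped to per-region predictions of paths, speeds, and stops, can be illustrated with a minimal sketch. This is not the authors' implementation: the grid size, embedding dimension, single attention layer, and head names are all illustrative assumptions, and random weights stand in for trained ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Global self-attention: every patch token attends to every other
    patch, so long-range (cross-regional) correlations are captured in
    a single step, unlike a local CNN kernel."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])  # (N, N) patch-to-patch weights
    return softmax(scores, axis=-1) @ v

# Toy setup (assumed sizes): an 8x8 grid of scene patches, 32-dim embeddings.
n_patches, d = 64, 32
patch_embed = rng.normal(size=(n_patches, d))  # stand-in trajectory features
pos_embed = rng.normal(size=(n_patches, d))    # positional embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

tokens = patch_embed + pos_embed               # joint encoding, as in a ViT
feats = self_attention(tokens, Wq, Wk, Wv)

# Three per-patch prediction heads (hypothetical names): path frequency,
# mean speed, and stop probability for each region of the scene.
W_path, W_vel, W_stop = (rng.normal(size=(d, 1)) * 0.1 for _ in range(3))
path_map = feats @ W_path
vel_map = feats @ W_vel
stop_map = 1.0 / (1.0 + np.exp(-(feats @ W_stop)))  # sigmoid in [0, 1]

print(path_map.shape, vel_map.shape, stop_map.shape)
```

Each head emits one value per patch, so reshaping the outputs back to the 8x8 grid yields the spatial maps of traversed paths, speeds, and stop regions that the article's metrics evaluate.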