🤖 AI Summary
Existing methods struggle to jointly capture the dynamic imbalance between periodic and chaotic patterns in human mobility, while neglecting highly predictable contextual cues such as temporal information. To address this, we propose CANOE, a novel trajectory prediction model. First, it introduces a biologically inspired Chaotic Neural Oscillatory Attention mechanism that adaptively balances regularity and stochasticity in mobility behavior. Second, it designs a "Who-When-Where" Tri-Pair Interaction Encoder and a cross-context attention decoder to enable deep multimodal spatiotemporal context fusion. Extensive experiments on multiple real-world trajectory datasets demonstrate that CANOE consistently outperforms state-of-the-art baselines by 3.17%-13.11% in prediction accuracy. Moreover, it exhibits strong robustness across trajectories with varying degrees of chaos, validating its capability to accurately model complex, heterogeneous mobility patterns.
📝 Abstract
Next location prediction is a key task in human mobility analysis, crucial for applications such as smart city resource allocation and personalized navigation services. However, existing methods face two significant challenges: first, they fail to address the dynamic imbalance between periodic and chaotic mobility patterns, leading to inadequate adaptation over sparse trajectories; second, they underutilize contextual cues, such as temporal regularities in arrival times, which persist even in chaotic patterns and offer stronger predictability than spatial forecasts due to reduced search spaces. To tackle these challenges, we propose **CANOE**, a **C**h**A**otic **N**eural **O**scillator n**E**twork for next location prediction. CANOE introduces a biologically inspired Chaotic Neural Oscillatory Attention mechanism that injects adaptive variability into traditional attention, enabling balanced representation of evolving mobility behaviors, and employs a Tri-Pair Interaction Encoder along with a Cross-Context Attentive Decoder to fuse multimodal "who-when-where" contexts in a joint framework for enhanced prediction performance. Extensive experiments on two real-world datasets demonstrate that CANOE consistently and significantly outperforms a sizeable collection of state-of-the-art baselines, yielding 3.17%-13.11% improvement over the best-performing baselines across different cases. In particular, CANOE makes robust predictions over trajectories with different levels of mobility chaos. A series of ablation studies also supports our key design choices. Our code is available at: https://github.com/yuqian2003/CANOE.
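To make the core idea of chaos-modulated attention concrete, the sketch below shows one *hypothetical* way such a mechanism could look: a logistic map (a classic chaotic dynamical system) generates a per-step gain that perturbs standard scaled dot-product attention logits. This is an illustrative toy under our own assumptions (the `logistic_map` parameters, the 10% gain range, and the per-query modulation scheme are ours), not the actual CANOE architecture described in the paper.

```python
import numpy as np

def logistic_map(x0: float, steps: int, r: float = 3.99) -> np.ndarray:
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t).

    For r near 4 and x0 in (0, 1), the trajectory is chaotic and
    stays inside (0, 1), giving a bounded pseudo-random signal.
    """
    xs = np.empty(steps)
    x = x0
    for t in range(steps):
        x = r * x * (1.0 - x)
        xs[t] = x
    return xs

def chaotic_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                      x0: float = 0.42) -> np.ndarray:
    """Scaled dot-product attention with an illustrative chaotic gain.

    Each query position's logits are scaled by a factor drawn from a
    chaotic sequence, injecting bounded, deterministic variability.
    """
    d = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d)                # (L_q, L_k)
    # Per-query gain in [0.95, 1.05], driven by the chaotic sequence.
    gain = 1.0 + 0.1 * (logistic_map(x0, Q.shape[0]) - 0.5)
    logits = logits * gain[:, None]
    # Numerically stable softmax over the key dimension.
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because the logistic map is deterministic, the same seed `x0` reproduces the same perturbation pattern, while small changes to `x0` yield very different sequences; that is the sense in which chaotic modulation differs from simply adding random noise to the logits.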