🤖 AI Summary
Pedestrian crossing intention prediction is critical for mitigating urban collision risks in autonomous vehicles, yet its accuracy is limited by behavioral diversity and context dependency. To address this, we propose a multimodal depth-guided attention network: (1) depth information is incorporated to enhance cross-modal alignment; (2) a dual-attention mechanism—comprising modality-wise and temporal attention—is designed to adaptively fuse seven visual and motion modalities and to model critical temporal dynamics; and (3) the overall architecture leverages Transformer-based spatiotemporal feature extraction. Evaluated on the JAAD dataset, our method significantly outperforms existing baselines in accuracy, robustness, and generalization to complex scenarios. The results validate both the effectiveness and the novelty of the depth-guided multimodal attention mechanism for pedestrian intention prediction.
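The dual-attention fusion described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, weight parameterization (a single learned vector per attention head), and tensor shapes are assumptions chosen only to show how modality-wise attention re-weights the seven modality streams before temporal attention pools over frames:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_attention_fusion(features, w_mod, w_time):
    """Illustrative dual-attention fusion (hypothetical shapes).

    features: (M, T, D) array - M modalities, T frames, D-dim features.
    w_mod:    (D,) scoring vector for modality-wise attention (assumed).
    w_time:   (D,) scoring vector for temporal attention (assumed).
    Returns a single (D,) fused representation.
    """
    # modality-wise attention: score each modality from its temporal mean
    mod_scores = features.mean(axis=1) @ w_mod          # (M,)
    mod_weights = softmax(mod_scores, axis=0)           # (M,)
    fused_seq = (mod_weights[:, None, None] * features).sum(axis=0)  # (T, D)

    # temporal attention: emphasize informative frames of the fused sequence
    t_scores = fused_seq @ w_time                       # (T,)
    t_weights = softmax(t_scores, axis=0)               # (T,)
    return (t_weights[:, None] * fused_seq).sum(axis=0)  # (D,)
```

In a trained network, `w_mod` and `w_time` would be learned parameters; here they are random stand-ins used only to demonstrate the data flow.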
📝 Abstract
Pedestrian crossing intention prediction is essential for the deployment of autonomous vehicles (AVs) in urban environments. Accurate prediction provides AVs with critical environmental cues, thereby reducing the risk of pedestrian-related collisions. However, the prediction task is challenging due to the diverse nature of pedestrian behavior and its dependence on multiple contextual factors. This paper proposes a multimodal fusion network that leverages seven modality features from both visual and motion branches, aiming to effectively extract and integrate complementary cues across modalities. Specifically, motion and visual features are extracted from the raw inputs using multiple Transformer-based extraction modules. A depth-guided attention module leverages depth information to guide attention towards salient regions in other modalities through comprehensive spatial feature interactions. To account for the varying importance of different modalities and frames, modality attention and temporal attention are designed to selectively emphasize informative modalities and effectively capture temporal dependencies. Extensive experiments on the JAAD dataset validate the effectiveness of the proposed network, which achieves superior performance compared to baseline methods.
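The depth-guided attention module can be sketched as cross-modal dot-product attention in which depth features act as queries over the spatial positions of another modality's feature map. This is an illustrative reading of the abstract, not the authors' implementation; the function name and the flattened `(N, D)` spatial layout are assumptions:

```python
import numpy as np

def depth_guided_attention(visual_feat, depth_feat):
    """Illustrative depth-guided spatial attention (hypothetical API).

    visual_feat: (N, D) - N spatial positions of a visual modality's feature map.
    depth_feat:  (N, D) - depth features at the same positions (assumed aligned).
    Depth features serve as queries that re-weight spatial positions of the
    visual features via scaled dot-product attention.
    """
    d = visual_feat.shape[1]
    scores = depth_feat @ visual_feat.T / np.sqrt(d)     # (N, N) cross-modal scores
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn = e / e.sum(axis=-1, keepdims=True)             # row-wise softmax
    return attn @ visual_feat                            # (N, D) depth-refined features
```

A real module would typically add learned query/key/value projections and a residual connection; those are omitted here to keep the spatial-interaction idea visible.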