Pedestrian Crossing Intention Prediction Using Multimodal Fusion Network

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pedestrian crossing intention prediction is critical for mitigating urban collision risks in autonomous vehicles, yet its accuracy is limited by behavioral diversity and context dependency. To address this, we propose a multimodal depth-guided attention network: (1) depth information is incorporated to enhance cross-modal alignment; (2) a dual-attention mechanism—comprising modality-wise and temporal attention—is designed to enable adaptive fusion of seven visual and motion modalities and effective modeling of critical temporal dynamics; and (3) the overall architecture leverages Transformer-based spatiotemporal feature extraction. Evaluated on the JAAD dataset, our method significantly outperforms existing baselines in accuracy, robustness, and generalization to complex scenarios. The results validate both the effectiveness and the novelty of the depth-guided multimodal attention mechanism for pedestrian intention prediction.

📝 Abstract
Pedestrian crossing intention prediction is essential for the deployment of autonomous vehicles (AVs) in urban environments. Ideal prediction provides AVs with critical environmental cues, thereby reducing the risk of pedestrian-related collisions. However, the prediction task is challenging due to the diverse nature of pedestrian behavior and its dependence on multiple contextual factors. This paper proposes a multimodal fusion network that leverages seven modality features from both visual and motion branches, aiming to effectively extract and integrate complementary cues across modalities. Specifically, motion and visual features are extracted from the raw inputs using multiple Transformer-based extraction modules. A depth-guided attention module leverages depth information to guide attention toward salient regions in other modalities through comprehensive spatial feature interactions. To account for the varying importance of different modalities and frames, modality attention and temporal attention are designed to selectively emphasize informative modalities and effectively capture temporal dependencies. Extensive experiments on the JAAD dataset validate the effectiveness of the proposed network, which achieves superior performance compared to baseline methods.
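The modality-wise and temporal attention described in the abstract can be illustrated with a minimal NumPy sketch: score each modality, softmax the scores into fusion weights, then score and pool frames over time. All shapes, weight vectors, and the scoring scheme below are illustrative assumptions, not the paper's actual architecture (which uses learned Transformer-based modules).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical dimensions: 7 modalities, 16 frames, 64-d features.
rng = np.random.default_rng(0)
M, T, D = 7, 16, 64
features = rng.standard_normal((M, T, D))  # per-modality sequences

# Modality attention: score each modality from its time-averaged
# feature, then fuse modalities by the softmax-weighted sum.
w_mod = rng.standard_normal(D)             # stand-in for a learned scorer
alpha = softmax(features.mean(axis=1) @ w_mod)      # (M,) modality weights
fused = np.einsum("m,mtd->td", alpha, features)     # (T, D)

# Temporal attention: score each frame of the fused sequence,
# then pool over time with the softmax weights.
w_t = rng.standard_normal(D)               # stand-in for a learned scorer
beta = softmax(fused @ w_t)                # (T,) frame weights
summary = beta @ fused                     # (D,) clip-level representation
```

In the paper these scoring functions are learned jointly with the network; the sketch only shows how the two attention stages reduce a (modalities × frames × features) tensor to a single intention-prediction feature.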
Problem

Research questions and friction points this paper is trying to address.

Predict pedestrian crossing intention using multimodal fusion network
Address diverse pedestrian behavior and contextual dependencies
Integrate visual and motion features with attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fusion network integrating seven modality features
Depth-guided attention module enhancing spatial feature interactions
Modality and temporal attention mechanisms emphasizing informative cues
Yuanzhe Li
Chair of Automotive Engineering, Technische Universität Berlin, Gustav-Meyer-Allee 25, 13355 Berlin, Germany
Steffen Müller
Professor of Automotive Engineering, TU Berlin