GTransPDM: A Graph-embedded Transformer with Positional Decoupling for Pedestrian Crossing Intention Prediction

📅 2024-09-30
🏛️ IEEE Signal Processing Letters
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low accuracy and inefficiency in pedestrian crossing-intention prediction for autonomous driving—caused by image distortion, positional misalignment, and inaccurate environmental modeling—this paper proposes a multimodal spatiotemporal modeling framework. Our method introduces two key innovations: (1) a position-decoupling module that explicitly separates lateral motion and encodes depth cues to rectify visual distortions; and (2) a graph-embedded Transformer that jointly models the coupled dynamics among human pose skeletons, rectified 2D/3D positions, and ego-vehicle motion. By integrating pose estimation, graph neural networks, and Transformer architectures, the framework enables efficient, synergistic multimodal feature learning. Evaluated on PIE and JAAD benchmarks, our approach achieves state-of-the-art accuracy of 92.0% and 87.3%, respectively, with only 0.05 ms inference latency—substantially outperforming existing methods.

📝 Abstract
Understanding and predicting pedestrian crossing intention is crucial for the driving safety of autonomous vehicles. However, challenges arise when raw images or environmental context masks are used to extract the many factors needed for time-series network modeling, introducing pre-processing errors or a loss of efficiency. In particular, pedestrian positions captured by onboard cameras are often distorted and do not accurately reflect actual movement. To address these issues, GTransPDM, a Graph-embedded Transformer with a Position Decoupling Module, was developed for pedestrian crossing intention prediction by leveraging multi-modal features. First, a positional decoupling module was proposed to decompose pedestrian lateral motion and encode depth cues in the image view. Then, a graph-embedded Transformer was designed to capture the spatio-temporal dynamics of human pose skeletons, integrating essential factors such as position, skeleton, and ego-vehicle motion. Experimental results show that the proposed method achieves 92% accuracy on the PIE dataset and 87% accuracy on the JAAD dataset, with a processing speed of 0.05 ms, outperforming the state of the art.
Problem

Research questions and friction points this paper is trying to address.

Predict pedestrian crossing intention accurately for autonomous vehicles
Address distorted pedestrian positions from onboard cameras
Improve efficiency in extracting multi-modal features for prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Positional decoupling module for motion decomposition
Graph-embedded Transformer for spatio-temporal dynamics
Multi-modal integration of position, skeleton, and motion
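The "graph-embedded Transformer" pattern named above, a graph convolution over the pose skeleton per frame followed by Transformer-style self-attention over time, can be sketched in a few lines. This is not the authors' implementation; all dimensions, the toy skeleton, and the single-head attention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(X, A, W):
    # One graph-convolution layer over the pose skeleton.
    # X: (J, F) joint features; A: (J, J) adjacency with self-loops; W: (F, d)
    D = np.diag(1.0 / np.sqrt(A.sum(axis=1)))      # symmetric normalization
    return np.maximum(D @ A @ D @ X @ W, 0.0)      # ReLU(D^-1/2 A D^-1/2 X W)

def self_attention(X):
    # Single-head scaled dot-product attention over the time axis; X: (T, d)
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

# Toy sizes (hypothetical): 5 joints, 4 features per joint, 8 frames, width 16
J, F, T, d = 5, 4, 8, 16
A = np.eye(J)
A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1.0        # toy skeleton edges
W = rng.standard_normal((F, d))

# Per-frame GCN over the pose graph, pooled over joints, then attention over time.
frames = [gcn_layer(rng.standard_normal((J, F)), A, W).mean(axis=0) for _ in range(T)]
out = self_attention(np.stack(frames))             # (T, d) temporal context features
print(out.shape)
```

In the paper's full model, the per-frame tokens would also carry the decoupled position and ego-vehicle motion features before the temporal attention stage; the sketch shows only the skeleton branch.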
Chen Xie
Politecnico di Torino
Synthesis of smart sensors
Ciyun Lin
Department of Traffic Information and Control Engineering, Jilin University, Changchun 130022, China
Xiaoyu Zheng
DERI-Queen Mary University of London
Bowen Gong
Department of Traffic Information and Control Engineering, Jilin University, Changchun 130022, China
Dayong Wu
Texas A&M Transportation Institute, Texas A&M University, 75251, USA
Antonio M. López
Computer Vision Center (CVC), Computer Science Department, Universitat Autònoma de Barcelona (UAB), 08193 Bellaterra, Spain