🤖 AI Summary
To address low accuracy and inefficiency in pedestrian crossing-intention prediction for autonomous driving—caused by image distortion, positional misalignment, and inaccurate environmental modeling—this paper proposes a multimodal spatiotemporal modeling framework. Our method introduces two key innovations: (1) a position-decoupling module that explicitly separates lateral motion and encodes depth cues to rectify visual distortions; and (2) a graph-embedded Transformer that jointly models the coupled dynamics among human pose skeletons, rectified 2D/3D positions, and ego-vehicle motion. By integrating pose estimation, graph neural networks, and Transformer architectures, the framework enables efficient, synergistic multimodal feature learning. Evaluated on PIE and JAAD benchmarks, our approach achieves state-of-the-art accuracy of 92.0% and 87.3%, respectively, with only 0.05 ms inference latency—substantially outperforming existing methods.
📝 Abstract
Understanding and predicting pedestrian crossing intention is crucial for the driving safety of autonomous vehicles. Nonetheless, challenges arise when processing images or environmental context masks to extract various factors for time-series network modeling, causing pre-processing errors or a loss of efficiency. In particular, pedestrian positions captured by onboard cameras are often distorted and do not accurately reflect pedestrians' actual movements. To address these issues, GTransPDM -- a Graph-embedded Transformer with a Position Decoupling Module -- was developed for pedestrian crossing-intention prediction by leveraging multi-modal features. First, a position decoupling module was proposed to decompose pedestrian lateral motion and encode depth cues in the image view. Then, a graph-embedded Transformer was designed to capture the spatio-temporal dynamics of human pose skeletons, integrating essential factors such as position, skeleton, and ego-vehicle motion. Experimental results indicate that the proposed method achieves 92% accuracy on the PIE dataset and 87% accuracy on the JAAD dataset, with a processing speed of 0.05 ms per frame, outperforming state-of-the-art methods.
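To make the fusion idea concrete, here is a minimal NumPy sketch of the general pattern the abstract describes: per-frame graph convolution over a pose skeleton, fusion with position and ego-motion cues, and temporal self-attention. This is not the authors' implementation; the chain-graph skeleton, all dimensions, the mean-pooling fusion, and the classifier head are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gcn_layer(X, A_norm, W):
    # One graph-convolution layer over skeleton joints: (J, F) -> (J, D), ReLU activation
    return np.maximum(A_norm @ X @ W, 0.0)

def self_attention(X, Wq, Wk, Wv):
    # Single-head scaled dot-product attention across time steps: (T, F) -> (T, D)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
T, J, F, D = 16, 17, 3, 32  # frames, joints, per-joint features (x, y, conf), model dim

# Toy chain skeleton as a stand-in for a real (e.g. COCO-style) joint graph
A = np.zeros((J, J))
for i in range(J - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_norm = normalize_adjacency(A)

pose = rng.standard_normal((T, J, F))  # pose-skeleton sequence
pos = rng.standard_normal((T, 4))      # decoupled position cues (hypothetical layout)
ego = rng.standard_normal((T, 1))      # ego-vehicle motion, e.g. speed

# Per-frame graph embedding, mean-pooled over joints, fused with position/ego features
W_g = rng.standard_normal((F, D)) * 0.1
graph_feat = np.stack([gcn_layer(pose[t], A_norm, W_g).mean(axis=0) for t in range(T)])
fused = np.concatenate([graph_feat, pos, ego], axis=1)  # (T, D + 5)

# Temporal modeling with self-attention, then a linear crossing/not-crossing head
Wq, Wk, Wv = (rng.standard_normal((fused.shape[1], D)) * 0.1 for _ in range(3))
temporal = self_attention(fused, Wq, Wk, Wv)            # (T, D)
W_cls = rng.standard_normal((D, 2)) * 0.1
logits = temporal.mean(axis=0) @ W_cls                  # 2-way intention logits
```

A trained model would of course learn these weights end to end; the sketch only shows how skeleton, position, and ego-motion streams can be combined before temporal attention.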