🤖 AI Summary
Existing traffic signal modeling primarily relies on raw numerical sensor data, neglecting semantic information embedded in multimodal urban data (e.g., visual and textual modalities), thereby limiting the understanding and prediction of complex traffic dynamics. To address this, we propose MTP, a novel Multimodal Traffic Prediction framework that pioneers the transformation of traffic signals into three complementary representations: frequency-domain images, periodicity-aware images, and descriptive textual narratives. MTP introduces a frequency-domain hierarchical contrastive learning mechanism to achieve cross-modal semantic alignment and fusion. It jointly integrates vision-enhanced feature extraction, topic-guided text generation, and frequency-domain MLP-based modeling to holistically capture temporal, spectral, and semantic characteristics of traffic signals. Extensive experiments across six real-world datasets demonstrate that MTP consistently outperforms state-of-the-art methods, achieving average MAE reductions of 12.7%–19.3% in multi-step traffic flow forecasting—validating the effectiveness and generalizability of multimodal spectral joint modeling.
📝 Abstract
With rapid urbanization in the modern era, traffic signals from various sensors have been playing a significant role in monitoring the states of cities, which provides a strong foundation in ensuring safe travel, reducing traffic congestion and optimizing urban mobility. Most existing methods for traffic signal modeling often rely on the original data modality, i.e., numerical direct readings from the sensors in cities. However, this unimodal approach overlooks the semantic information existing in multimodal heterogeneous urban data in different perspectives, which hinders a comprehensive understanding of traffic signals and limits the accurate prediction of complex traffic dynamics. To address this problem, we propose a novel extit{M}ultimodal framework, extit{MTP}, for urban extit{T}raffic extit{P}rofiling, which learns multimodal features through numeric, visual, and textual perspectives. The three branches drive for a multimodal perspective of urban traffic signal learning in the frequency domain, while the frequency learning strategies delicately refine the information for extraction. Specifically, we first conduct the visual augmentation for the traffic signals, which transforms the original modality into frequency images and periodicity images for visual learning. Also, we augment descriptive texts for the traffic signals based on the specific topic, background information and item description for textual learning. To complement the numeric information, we utilize frequency multilayer perceptrons for learning on the original modality. We design a hierarchical contrastive learning on the three branches to fuse the spectrum of three modalities. Finally, extensive experiments on six real-world datasets demonstrate superior performance compared with the state-of-the-art approaches.