A Modality-Tailored Graph Modeling Framework for Urban Region Representation via Contrastive Learning

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing graph-based methods for multimodal urban data modeling face two key limitations: (1) they apply identical GNN architectures to all modalities, neglecting modality-specific structures and characteristics; and (2) they fuse modalities with globally fixed weights, ignoring spatial heterogeneity across regions. To address these issues, the paper proposes MTGRR, a modality-tailored graph modeling framework for urban region representation. MTGRR divides modalities into aggregated-level (e.g., POI, taxi mobility) and point-level (e.g., street view images) groups, processing the former with a mixture-of-experts GNN in which a dedicated expert handles each modality, and the latter with a dual-level GNN that extracts fine-grained visual semantics. A spatially-aware fusion mechanism then infers region-specific modality weights, and a joint contrastive learning strategy combines aggregated-level, point-level, and fusion-level objectives. Experiments on two real-world urban datasets spanning six modalities and three downstream tasks show that MTGRR consistently outperforms state-of-the-art baselines, validating its modeling accuracy and generalization capability.

📝 Abstract
Graph-based models have emerged as a powerful paradigm for modeling multimodal urban data and learning region representations for various downstream tasks. However, existing approaches face two major limitations. (1) They typically employ identical graph neural network architectures across all modalities, failing to capture modality-specific structures and characteristics. (2) During the fusion stage, they often neglect spatial heterogeneity by assuming that the aggregation weights of different modalities remain invariant across regions, resulting in suboptimal representations. To address these issues, we propose MTGRR, a modality-tailored graph modeling framework for urban region representation, built upon a multimodal dataset comprising point of interest (POI), taxi mobility, land use, road element, remote sensing, and street view images. (1) MTGRR categorizes modalities into two groups based on spatial density and data characteristics: aggregated-level and point-level modalities. For aggregated-level modalities, MTGRR employs a mixture-of-experts (MoE) graph architecture, where each modality is processed by a dedicated expert GNN to capture distinct modality-specific characteristics. For the point-level modality, a dual-level GNN is constructed to extract fine-grained visual semantic features. (2) To obtain effective region representations under spatial heterogeneity, a spatially-aware multimodal fusion mechanism is designed to dynamically infer region-specific modality fusion weights. Building on this graph modeling framework, MTGRR further employs a joint contrastive learning strategy that integrates region aggregated-level, point-level, and fusion-level objectives to optimize region representations. Experiments on two real-world datasets across six modalities and three tasks demonstrate that MTGRR consistently outperforms state-of-the-art baselines, validating its effectiveness.
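The spatially-aware fusion step described in the abstract can be sketched as a region-specific softmax gate over per-modality embeddings. This is a minimal illustration under assumed shapes and a linear gating function, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_fusion(z, W, b):
    """Fuse modality embeddings with region-specific softmax weights.

    z: (regions, modalities, dim) per-modality region embeddings
    W: (modalities * dim, modalities) gating weights (assumed linear gate)
    b: (modalities,) gating bias
    """
    r, m, d = z.shape
    logits = z.reshape(r, m * d) @ W + b          # one logit per modality, per region
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)             # region-specific fusion weights
    return (w[:, :, None] * z).sum(axis=1)        # weighted sum -> (regions, dim)

# Hypothetical sizes: 180 regions, the paper's 6 modalities, 64-dim embeddings.
z = rng.normal(size=(180, 6, 64))
W = rng.normal(size=(6 * 64, 6)) * 0.01
b = np.zeros(6)
fused = spatial_fusion(z, W, b)
print(fused.shape)  # (180, 64)
```

Because the gate sees each region's own embeddings, two regions with the same modalities can still receive different fusion weights, which is the point of modeling spatial heterogeneity.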
Problem

Research questions and friction points this paper is trying to address.

How to tailor graph architectures to heterogeneous urban data modalities
How to account for spatial heterogeneity when fusing multimodal signals
How to align region representations across levels via contrastive objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-tailored graph architectures capture distinct characteristics
Spatially-aware fusion dynamically infers region-specific modality weights
Joint contrastive learning integrates multi-level objectives for optimization
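The multi-level contrastive objectives pair different views of the same region. A standard InfoNCE loss is one plausible instantiation (the page does not give the paper's exact loss, so the formulation below is an assumption):

```python
import numpy as np

def info_nce(a, b, tau=0.1):
    """InfoNCE between two views: row i of `a` should match row i of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = (a @ b.T) / tau                          # cosine similarity / temperature
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(sim)
    p /= p.sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(p)).mean())       # positives sit on the diagonal

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 16))                      # hypothetical 32 regions, dim 16
aligned = info_nce(x, x)                           # matching views -> low loss
mismatched = info_nce(x, rng.normal(size=(32, 16)))  # random pairing -> higher loss
```

Minimizing such a loss jointly at the aggregated, point, and fusion levels pulls the views of each region together while pushing apart embeddings of different regions.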
Yaya Zhao
Center for Applied Statistics, School of Statistics, Innovation Platform, Renmin University of China
Kaiqi Zhao
Professor, Harbin Institute of Technology, Shenzhen
Data Mining · Machine Learning · Spatiotemporal Data · Geo-textual Data
Zixuan Tang
Center for Applied Statistics, School of Statistics, Innovation Platform, Renmin University of China
Zhiyuan Liu
Center for Applied Statistics, School of Statistics, Innovation Platform, Renmin University of China
Xiaoling Lu
Center for Applied Statistics, School of Statistics, Innovation Platform, Renmin University of China
Yalei Du
Beijing Baixingkefu Network Technology Co., Ltd.