WalkCLIP: Multimodal Learning for Urban Walkability Prediction

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional walkability assessment relies on costly field surveys or single-source data (e.g., satellite/street-view imagery or population dynamics), failing to jointly capture spatial scale, pedestrian-level perspective, and behavioral semantics. To address this, we propose the first multimodal walkability prediction framework integrating satellite imagery, street-view imagery, and crowd mobility data. Our method leverages CLIP and GPT-4o to generate aligned image–text descriptions for learning pedestrian-perceptual representations, and introduces a spatial aggregation module to jointly model neighborhood context and dynamic population features. Evaluated on 4,660 sampling points across Minneapolis–St. Paul, our model significantly outperforms unimodal and existing bimodal baselines in both predictive accuracy and spatial consistency. It enables a more comprehensive and interpretable quantitative assessment of urban walking environments, bridging perceptual, contextual, and behavioral dimensions of walkability.
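The summary does not include the authors' alignment code; as a rough illustration only, fine-tuning CLIP on street-view images paired with GPT-4o captions might look like the sketch below (the model checkpoint, file paths, caption list, and training loop are assumptions, not the released pipeline).

```python
# Hypothetical sketch: fine-tune CLIP so street-view embeddings align with
# GPT-4o walkability captions (checkpoint, paths, and captions are assumptions).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def contrastive_step(image_paths, captions):
    """One training step: pull matched image-caption pairs together."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=captions, images=images,
                       return_tensors="pt", padding=True, truncation=True)
    outputs = model(**inputs, return_loss=True)  # CLIP's built-in contrastive loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```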

📝 Abstract
Urban walkability is a cornerstone of public health, sustainability, and quality of life. Traditional walkability assessments rely on surveys and field audits, which are costly and difficult to scale. Recent studies have used satellite imagery, street view imagery, or population indicators to estimate walkability, but these single-source approaches capture only one dimension of the walking environment. Satellite data describe the built environment from above, but overlook the pedestrian perspective. Street view imagery captures conditions at the ground level, but lacks broader spatial context. Population dynamics reveal patterns of human activity but not the visual form of the environment. We introduce WalkCLIP, a multimodal framework that integrates these complementary viewpoints to predict urban walkability. WalkCLIP learns walkability-aware vision-language representations from GPT-4o generated image captions, refines these representations with a spatial aggregation module that incorporates neighborhood context, and fuses the resulting features with representations from a population dynamics foundation model. Evaluated at 4,660 locations throughout Minneapolis-Saint Paul, WalkCLIP outperforms unimodal and multimodal baselines in both predictive accuracy and spatial alignment. These results show that the integration of visual and behavioral signals yields reliable predictions of the walking environment.
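The spatial aggregation module is only described at a high level in the abstract; one plausible reading is nearest-neighbor pooling over per-location embeddings, sketched below (the neighbor count, Euclidean distance on lon/lat coordinates, and mean pooling are assumptions, not the paper's stated design).

```python
# Hypothetical sketch of a spatial aggregation step: each sampling point's
# embedding is averaged with its k nearest neighbors to inject neighborhood context.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def aggregate_neighborhood(coords, embeddings, k=8):
    """coords: (N, 2) lon/lat; embeddings: (N, D) per-point features."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(coords)  # +1 keeps the point itself
    _, idx = nn.kneighbors(coords)
    # Mean-pool each point's own embedding with its neighbors' embeddings.
    return embeddings[idx].mean(axis=1)  # (N, D)
```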
Problem

Research questions and friction points this paper is trying to address.

Traditional walkability assessment relies on costly field surveys that are hard to scale
Single-source data (satellite, street view, or population dynamics) captures only one dimension of the walking environment
Pedestrian-level perception, spatial context, and behavioral activity are rarely modeled jointly
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal framework integrates complementary urban data sources
Learns walkability-aware representations from GPT-4o-generated image captions
Spatial aggregation module refines features with neighborhood context
Fuses visual features with a population dynamics foundation model
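To make the fusion step concrete, below is a minimal hypothetical head that concatenates a CLIP-derived visual embedding with a population-dynamics embedding and regresses a walkability score; the embedding dimensions and MLP layout are placeholders, not the paper's reported architecture.

```python
# Hypothetical fusion head: concatenate visual and population-dynamics embeddings,
# then regress a scalar walkability score (dimensions are assumptions).
import torch
import torch.nn as nn

class WalkabilityFusionHead(nn.Module):
    def __init__(self, vis_dim=512, pop_dim=256, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vis_dim + pop_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar walkability score
        )

    def forward(self, vis_emb, pop_emb):
        return self.mlp(torch.cat([vis_emb, pop_emb], dim=-1)).squeeze(-1)

# Example: a batch of 4 locations.
head = WalkabilityFusionHead()
scores = head(torch.randn(4, 512), torch.randn(4, 256))  # shape (4,)
```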
Shilong Xiang
University of Minnesota Twin Cities, Minneapolis, MN, USA
JangHyeon Lee
University of Minnesota Twin Cities, Minneapolis, MN, USA
Min Namgung
University of Minnesota Twin Cities, Minneapolis, MN, USA
Yao-Yi Chiang
Associate Professor, Computer Science & Engineering, University of Minnesota
spatial AI, data mining, machine learning, geographic information science, computer vision