Urban Safety Perception Assessments via Integrating Multimodal Large Language Models with Street View Images

📅 2024-07-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
Traditional urban safety perception assessment relies on labor-intensive manual surveys, suffering from high costs, lengthy deployment, strong subjectivity, and poor cross-city generalizability. To address these limitations, we propose a training-free, transferable, fully automated evaluation framework: (1) it pioneers the use of multimodal large language models (e.g., GPT-4) for safety perception ranking of street-view imagery; (2) it integrates CLIP-based vision–language embeddings with K-nearest neighbors (K-NN) retrieval to eliminate dependence on large-scale human annotations; and (3) it generates city-level safety indices without model fine-tuning. Evaluated on human-annotated anchor sets, our method achieves strong agreement with ground-truth perceptual judgments (Spearman’s ρ > 0.85), outperforming supervised deep learning approaches requiring extensive labeled data. The framework significantly improves assessment efficiency, scalability, and cross-city generalization capability.

📝 Abstract
Measuring urban safety perception is an important and complex task that traditionally relies heavily on human resources. This process often involves extensive field surveys, manual data collection, and subjective assessments, which can be time-consuming, costly, and sometimes inconsistent. Street View Images (SVIs), combined with deep learning methods, offer a way to realize large-scale urban safety detection. However, achieving this goal often requires extensive human annotation to train safety ranking models, and architectural differences between cities hinder the transferability of these models. A fully automated method for conducting safety evaluations is therefore essential. Recent advances in multimodal large language models (MLLMs) have demonstrated powerful reasoning and analytical capabilities, and cutting-edge models, e.g., GPT-4, have shown surprising performance on many tasks. We employed these models for urban safety ranking on a human-annotated anchor set and validated that the results from MLLMs align closely with human perceptions. Additionally, we proposed a method based on pre-trained Contrastive Language-Image Pre-training (CLIP) features and K-Nearest Neighbors (K-NN) retrieval to quickly assess the safety index of an entire city. Experimental results show that our method outperforms existing deep learning approaches that require training, achieving efficient and accurate urban safety evaluations. The proposed automation of urban safety perception assessment is a valuable tool for city planners, policymakers, and researchers aiming to improve urban environments.
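The CLIP + K-NN retrieval step described in the abstract can be sketched roughly as follows: embed an MLLM-scored anchor set of street-view images with CLIP, then score each unlabeled city image as the average safety score of its nearest anchors in embedding space. This is a minimal illustration, not the authors' implementation; the random vectors, dimensionality, score range, and choice of K are all stand-in assumptions (real CLIP image features would come from a model such as ViT-B/32).

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical anchor set: CLIP image embeddings (512-d for ViT-B/32) of
# street-view images, each with a safety score from the MLLM-ranked anchor
# set. Random unit vectors stand in for real CLIP features here.
rng = np.random.default_rng(0)
anchor_embeddings = rng.standard_normal((200, 512))
anchor_embeddings /= np.linalg.norm(anchor_embeddings, axis=1, keepdims=True)
anchor_scores = rng.uniform(0.0, 10.0, size=200)  # assumed 0-10 safety scale

# K-NN regression over cosine distance: a new image's safety index is the
# mean score of its K most similar anchors in CLIP feature space.
knn = KNeighborsRegressor(n_neighbors=5, metric="cosine")
knn.fit(anchor_embeddings, anchor_scores)

# Score a batch of unlabeled city-wide images (again stand-in embeddings).
city_embeddings = rng.standard_normal((10, 512))
city_embeddings /= np.linalg.norm(city_embeddings, axis=1, keepdims=True)
safety_index = knn.predict(city_embeddings)  # one score per image
```

Because the index is a mean over retrieved anchor scores, no model fine-tuning is needed: transferring to a new city only requires re-ranking a small local anchor set with the MLLM.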
Problem

Research questions and friction points this paper is trying to address.

Automating urban safety perception using multimodal models
Reducing reliance on manual data collection methods
Enhancing transferability of safety models across cities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates MLLMs for urban safety ranking
Combines CLIP features with K-NN retrieval for city-scale safety indexing
Automates evaluations without human annotation
Jiaxin Zhang
Architecture and Design College, Nanchang University, No. 999, Xuefu Avenue, Honggutan New District, Nanchang, 330031, China
Yunqin Li
Architecture and Design College, Nanchang University, No. 999, Xuefu Avenue, Honggutan New District, Nanchang, 330031, China
Tomohiro Fukuda
Division of Sustainable Energy and Environmental Engineering, Osaka University, 2-1 Yamadaoka, Suita, Osaka, 565-0871, Japan
Bowen Wang
Institute of Datability Science, Osaka University, 2-1 Yamadaoka, Suita, Osaka, 565-0871, Japan