CityLens: Benchmarking Large Language-Vision Models for Urban Socioeconomic Sensing

📅 2025-05-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses socioeconomic indicator prediction in urban environments by introducing CityLens, the first multimodal urban sensing benchmark. It spans 17 globally distributed cities, six domains, and 11 prediction tasks, integrating satellite imagery, street-view images, and authoritative socioeconomic data. Methodologically, it aligns city-level visual data with socioeconomic indicators and proposes three evaluation paradigms: direct metric prediction, normalized metric estimation, and feature-based regression. It further develops a zero-shot and fine-tuning evaluation framework together with a cross-city generalization analysis protocol. Comprehensive experiments benchmark 17 state-of-the-art vision-language models, revealing markedly weaker performance on abstract dimensions (e.g., education, health) than on concrete ones (e.g., economy, transportation), and exposing systematic limitations in low-causality and long-tailed-distribution scenarios. All data, code, and evaluation tooling are publicly released.

📝 Abstract
Understanding urban socioeconomic conditions through visual data is a challenging yet essential task for sustainable urban development and policy planning. In this work, we introduce $\textbf{CityLens}$, a comprehensive benchmark designed to evaluate the capabilities of large language-vision models (LLVMs) in predicting socioeconomic indicators from satellite and street view imagery. We construct a multi-modal dataset covering a total of 17 globally distributed cities, spanning 6 key domains: economy, education, crime, transport, health, and environment, reflecting the multifaceted nature of urban life. Based on this dataset, we define 11 prediction tasks and utilize three evaluation paradigms: Direct Metric Prediction, Normalized Metric Estimation, and Feature-Based Regression. We benchmark 17 state-of-the-art LLVMs across these tasks. Our results reveal that while LLVMs demonstrate promising perceptual and reasoning capabilities, they still exhibit limitations in predicting urban socioeconomic indicators. CityLens provides a unified framework for diagnosing these limitations and guiding future efforts in using LLVMs to understand and predict urban socioeconomic patterns. Our codes and datasets are open-sourced via https://github.com/tsinghua-fib-lab/CityLens.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLVMs for urban socioeconomic sensing from imagery
Assessing model performance across 17 global cities and 6 domains
Diagnosing limitations in predicting socioeconomic indicators via multi-modal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal dataset covering 17 global cities
11 prediction tasks with 3 evaluation paradigms
Benchmarked 17 state-of-the-art LLVMs
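To make the third evaluation paradigm concrete, here is a minimal sketch of feature-based regression evaluation: image-derived features (e.g., pooled model embeddings) are regressed onto a socioeconomic indicator and scored with R². The function names, ridge penalty, and synthetic data below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination (R^2)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def feature_regression_eval(features, indicator, alpha=1.0):
    """Fit ridge regression from image-derived features to a
    socioeconomic indicator; return the in-sample R^2.
    Uses the closed-form ridge solution w = (X'X + aI)^-1 X'y."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    identity = np.eye(X.shape[1])
    w = np.linalg.solve(X.T @ X + alpha * identity, X.T @ indicator)
    return r2_score(indicator, X @ w)

# Synthetic sanity check (hypothetical data, not CityLens data):
# the indicator is a noisy linear function of 8 feature dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # stand-in for image embeddings
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=200)
print(f"R^2 = {feature_regression_eval(X, y):.3f}")
```

A frozen regressor like this isolates how much socioeconomic signal the visual features carry, independent of the model's instruction-following ability.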
Tianhui Liu
Hong Kong University of Science and Technology (Guangzhou), Tsinghua University
Large Language Model · Urban Science · Spatial Intelligence
Jie Feng
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Hetian Pang
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Xin Zhang
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Tianjian Ouyang
Tsinghua University
Zhiyuan Zhang
School of Electronic and Information Engineering, Beijing Jiaotong University
Yong Li
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China