GeoVision Labeler: Zero-Shot Geospatial Classification with Vision and Language Models

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot classification of geospatial imagery remains challenging in label-scarce scenarios such as disaster response. This paper introduces a strictly zero-shot, modular, and interpretable remote sensing classification framework that requires neither labeled data nor task-specific fine-tuning: a vision large language model (vLLM) first generates descriptive image captions, and a large language model (LLM) then maps these descriptions onto user-defined semantic categories. To support complex multi-class reasoning, the authors propose a recursive, LLM-driven mechanism that clusters classes into meta-classes for hierarchical classification. On the SpaceNet v7 binary “building/non-building” task, the framework achieves 93.2% zero-shot accuracy, and it delivers competitive zero-shot performance on the multi-class benchmarks UC Merced and RESISC45, demonstrating robust generalization across diverse geospatial domains without any supervision.

📝 Abstract
Classifying geospatial imagery remains a major bottleneck for applications such as disaster response and land-use monitoring, particularly in regions where annotated data are scarce or unavailable. Existing tools (e.g., RS-CLIP) that claim zero-shot classification capabilities for satellite imagery nonetheless rely on task-specific pretraining and adaptation to reach competitive performance. We introduce GeoVision Labeler (GVL), a strictly zero-shot classification framework: a vision Large Language Model (vLLM) generates rich, human-readable image descriptions, which are then mapped to user-defined classes by a conventional Large Language Model (LLM). This modular and interpretable pipeline enables flexible image classification for a large range of use cases. We evaluated GVL across three benchmarks: SpaceNet v7, UC Merced, and RESISC45. It achieves up to 93.2% zero-shot accuracy on the binary Buildings vs. No Buildings task on SpaceNet v7. For complex multi-class classification tasks (UC Merced, RESISC45), we implemented recursive LLM-driven clustering to form meta-classes at successive depths, followed by hierarchical classification, first resolving coarse groups and then finer distinctions, to deliver competitive zero-shot performance. GVL is open-sourced at https://github.com/microsoft/geo-vision-labeler to catalyze adoption in real-world geospatial workflows.
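The two-stage pipeline described in the abstract (vLLM captioning followed by LLM label mapping) can be sketched as below. This is a minimal illustration, not the actual GVL API: `caption_fn` and `map_fn` are hypothetical injectable callables standing in for the vLLM and LLM, and are stubbed here so the sketch runs without model access.

```python
def classify_image(image, classes, caption_fn, map_fn):
    """Two-stage zero-shot classification: describe, then map to a class."""
    caption = caption_fn(image)       # stage 1: vLLM describes the scene
    label = map_fn(caption, classes)  # stage 2: LLM picks a user-defined class
    assert label in classes, "mapper must return one of the given classes"
    return label, caption

# Stub stand-ins for the vLLM/LLM calls, for illustration only.
def stub_caption(image):
    return "An aerial tile showing dense rooftops along a road grid."

def stub_map(caption, classes):
    # A real implementation would prompt an LLM; we keyword-match instead.
    return classes[0] if "rooftops" in caption else classes[1]

label, caption = classify_image(
    None, ["Buildings", "No Buildings"], stub_caption, stub_map
)
print(label)  # Buildings
```

Because the two stages are injected as plain callables, either model can be swapped out without touching the rest of the pipeline, which is the modularity the abstract emphasizes.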
Problem

Research questions and friction points this paper is trying to address.

Classifying geospatial imagery without annotated data
Zero-shot classification for disaster response and land-use monitoring
Modular framework combining vision and language models for flexible classification
Innovation

Methods, ideas, or system contributions that make the work stand out.

vLLM generates human-readable image descriptions
LLM maps descriptions to user-defined classes
Recursive LLM-driven clustering for hierarchical classification
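The hierarchical step above can be sketched as a top-down walk over a meta-class tree. This is a hedged illustration under stated assumptions: the hand-written `meta_tree` stands in for the paper's recursive LLM-driven clustering output, and `stub_choose` replaces the LLM prompt at each level with simple keyword matching.

```python
def hierarchical_classify(caption, tree, choose):
    """Resolve coarse meta-classes first, then finer distinctions."""
    while isinstance(tree, dict):                 # internal node: meta-classes
        tree = tree[choose(caption, list(tree.keys()))]
    return choose(caption, tree)                  # leaf level: concrete classes

# Hypothetical meta-class tree (depth 1); GVL builds this via LLM clustering.
meta_tree = {
    "man-made": ["airport", "residential", "harbor"],
    "natural": ["forest", "beach", "river"],
}

def stub_choose(caption, options):
    # A real implementation would prompt an LLM to pick among `options`.
    for opt in options:
        if opt in caption:
            return opt
    return options[0]

print(hierarchical_classify("a river winding through natural terrain",
                            meta_tree, stub_choose))  # river
```

Resolving coarse groups first keeps each LLM decision over a short option list, which is what makes large label sets such as RESISC45's tractable for this approach.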