🤖 AI Summary
To address the limited capabilities of existing models for language-guided fine-grained segmentation and cross-object reasoning in complex remote sensing imagery, this paper introduces the first language-guided reasoning segmentation paradigm tailored to remote sensing. Methodologically, the authors construct GRES, a geospatial reasoning segmentation dataset, and PreGRES, a million-scale multimodal pretraining set; design a vision-language model, LISAt, incorporating remote sensing–specific positional encoding and spectral-aware features; and jointly optimize captioning, visual question answering, and segmentation in an end-to-end manner. Experiments demonstrate that the approach achieves a 10.04% BLEU-4 gain over RS-GPT4V on remote sensing image captioning and improves reasoning segmentation gIoU by 143.36%, substantially outperforming existing open-domain models. This work establishes a new paradigm and benchmark resources for remote sensing semantic understanding and human–machine collaborative interpretation.
📝 Abstract
Segmentation models can recognize a pre-defined set of objects in images. However, models that can reason over complex user queries that implicitly refer to multiple objects of interest are still in their infancy. Recent advances in reasoning segmentation (generating segmentation masks from complex, implicit query text) demonstrate that vision-language models can operate across open domains and produce reasonable outputs. However, our experiments show that such models struggle with complex remote-sensing imagery. In this work, we introduce LISAt, a vision-language model designed to describe complex remote-sensing scenes, answer questions about them, and segment objects of interest. We trained LISAt on GRES, a new curated geospatial reasoning-segmentation dataset with 27,615 annotations over 9,205 images, and on PreGRES, a multimodal pretraining dataset containing over 1 million question-answer pairs. LISAt outperforms existing geospatial foundation models such as RS-GPT4V by over 10.04% (BLEU-4) on remote-sensing description tasks, and surpasses state-of-the-art open-domain models on reasoning segmentation tasks by 143.36% (gIoU). Our model, datasets, and code are available at https://lisat-bair.github.io/LISAt/.