Enabling Training-Free Text-Based Remote Sensing Segmentation

📅 2026-02-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a key limitation of existing text-guided remote sensing image segmentation methods: their reliance on additional trainable modules, which hinders generalization and deployment. The authors propose a text-driven segmentation framework that requires no training, or only lightweight LoRA fine-tuning, achieving for the first time fully training-free open-vocabulary semantic segmentation in remote sensing. The approach integrates vision-language models with SAM: CLIP acts as a mask selector for SAM's grid-based proposals, while GPT-5 (zero-shot) and a LoRA-finetuned Qwen-VL generate point prompts to guide SAM. The method supports zero-shot referring expression segmentation and reasoning-based segmentation, demonstrating state-of-the-art performance across 19 remote sensing benchmarks spanning open-vocabulary, referring expression, and reasoning segmentation scenarios.
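The contrastive branch described above can be illustrated with a minimal sketch: score each SAM mask proposal against the class-name text embeddings and keep the best-matching assignments. This assumes precomputed, L2-normalised CLIP embeddings and a hypothetical similarity threshold; the paper's exact selection rule is not specified here.

```python
import numpy as np

def select_masks(mask_embeds, text_embeds, threshold=0.25):
    """Assign each SAM mask proposal to the best-matching class prompt,
    keeping only assignments above a similarity threshold.

    mask_embeds: (M, D) L2-normalised CLIP image embeddings, one per
                 mask crop from SAM's grid-based proposals.
    text_embeds: (C, D) L2-normalised CLIP text embeddings, one per
                 class name in the open vocabulary.
    Returns a list of (mask_index, class_index, score) tuples.
    """
    sims = mask_embeds @ text_embeds.T          # (M, C) cosine similarities
    best_cls = sims.argmax(axis=1)              # best class per mask
    best_score = sims.max(axis=1)
    return [(m, int(best_cls[m]), float(best_score[m]))
            for m in range(len(mask_embeds))
            if best_score[m] >= threshold]
```

In practice the mask embeddings would come from encoding each masked crop with CLIP's image tower, and the text embeddings from prompts such as "a satellite image of a {class}".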

📝 Abstract
Recent advances in Vision Language Models (VLMs) and Vision Foundation Models (VFMs) have opened new opportunities for zero-shot text-guided segmentation of remote sensing imagery. However, most existing approaches still rely on additional trainable components, limiting their generalisation and practical applicability. In this work, we investigate to what extent text-based remote sensing segmentation can be achieved without additional training, by relying solely on existing foundation models. We propose a simple yet effective approach that integrates contrastive and generative VLMs with the Segment Anything Model (SAM), enabling a fully training-free or lightweight LoRA-tuned pipeline. Our contrastive approach employs CLIP as mask selector for SAM's grid-based proposals, achieving state-of-the-art open-vocabulary semantic segmentation (OVSS) in a completely zero-shot setting. In parallel, our generative approach enables reasoning and referring segmentation by generating click prompts for SAM using GPT-5 in a zero-shot setting and a LoRA-tuned Qwen-VL model, with the latter yielding the best results. Extensive experiments across 19 remote sensing benchmarks, including open-vocabulary, referring, and reasoning-based tasks, demonstrate the strong capabilities of our approach. Code will be released at https://github.com/josesosajs/trainfree-rs-segmentation.
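The generative branch turns a VLM reply into click prompts for SAM. A minimal sketch of that glue step is below; the JSON list-of-points reply format is an assumption for illustration, since the paper's exact prompt schema is not given in the abstract.

```python
import json
import numpy as np

def points_from_vlm(reply_text):
    """Parse a VLM reply containing a JSON list of [x, y] click
    coordinates into SAM-style point-prompt arrays."""
    # Extract the outermost JSON array from the free-form reply.
    start, end = reply_text.find("["), reply_text.rfind("]") + 1
    pts = json.loads(reply_text[start:end])
    coords = np.asarray(pts, dtype=np.float32)     # (N, 2) x,y clicks
    labels = np.ones(len(coords), dtype=np.int32)  # 1 = foreground click
    return coords, labels

# The arrays plug into SAM's predictor as point prompts, e.g.:
#   masks, _, _ = predictor.predict(point_coords=coords, point_labels=labels)
```

The same parsing works whether the clicks come from zero-shot GPT-5 or from the LoRA-tuned Qwen-VL, which is what keeps the pipeline training-free on the segmentation side.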
Problem

Research questions and friction points this paper is trying to address.

training-free
text-based segmentation
remote sensing
zero-shot
foundation models
Innovation

Methods, ideas, or system contributions that make the work stand out.

training-free
zero-shot segmentation
vision-language models
Segment Anything Model
open-vocabulary semantic segmentation
🔎 Similar Papers
2024-09-20 · IEEE Transactions on Geoscience and Remote Sensing · Citations: 2