🤖 AI Summary
Current CLIP-based, training-free open-vocabulary semantic segmentation (OVSS) methods for remote sensing imagery suffer from inaccurate localization of dense small objects and redundant inference pipelines. To address these issues, this paper proposes an end-to-end zero-shot OVSS framework tailored for remote sensing. Its core contributions are threefold: (1) the first integration of SAM 3 into remote sensing OVSS, jointly leveraging its semantic and instance heads for unified feature representation; (2) a novel mask fusion strategy to improve geographical object coverage completeness; and (3) an existence-score filtering mechanism to suppress spurious class predictions and enhance classification accuracy. Crucially, the method requires no training or fine-tuning. Extensive experiments across multiple remote sensing benchmarks demonstrate significant reductions in false detection rates, validating SAM 3's effectiveness and generalization capability for high-precision, robust open-vocabulary segmentation in complex geospatial scenes.
📄 Abstract
Most existing methods for training-free Open-Vocabulary Semantic Segmentation (OVSS) are based on CLIP. While these approaches have made progress, they often face challenges in precise localization or require complex pipelines to combine separate modules, especially in remote sensing scenarios where numerous dense and small targets are present. Recently, Segment Anything Model 3 (SAM 3) was proposed, unifying segmentation and recognition in a promptable framework. In this paper, we present a preliminary exploration of applying SAM 3 to the remote sensing OVSS task without any training. First, we implement a mask fusion strategy that combines the outputs from SAM 3's semantic segmentation head and the Transformer decoder (instance head). This allows us to leverage the strengths of both heads for better land coverage. Second, we utilize the presence score from the presence head to filter out categories that do not exist in the scene, reducing false positives caused by the vast vocabulary sizes and patch-level processing in geospatial scenes. We evaluate our method on extensive remote sensing datasets. Experiments show that this simple adaptation achieves promising performance, demonstrating the potential of SAM 3 for remote sensing OVSS. Our code is released at https://github.com/earth-insights/SegEarth-OV-3.
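The two adaptations described above (union-style fusion of the semantic-head and instance-head masks, followed by presence-score filtering of absent categories) can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact implementation: the function name `fuse_and_filter`, the tensor shapes, the max-based fusion rule, and the threshold `tau` are all hypothetical placeholders.

```python
import numpy as np

def fuse_and_filter(sem_logits, inst_masks, inst_labels, presence, tau=0.5):
    """Hypothetical sketch of SAM 3-style OVSS post-processing.

    sem_logits : (C, H, W) per-class scores from the semantic segmentation head
    inst_masks : (N, H, W) binary masks from the instance (Transformer decoder) head
    inst_labels: (N,) class index assigned to each instance mask
    presence   : (C,) per-class presence score from the presence head
    tau        : assumed presence threshold (illustrative value)
    Returns a (H, W) label map.
    """
    # Fuse: wherever an instance mask of class c fires, raise that class's
    # score, so small dense objects found by the instance head survive.
    fused = sem_logits.astype(np.float64).copy()
    for mask, c in zip(inst_masks, inst_labels):
        fused[c] = np.maximum(fused[c], mask.astype(np.float64))
    # Filter: suppress every class whose presence score is below tau,
    # reducing false positives from large vocabularies.
    fused[presence < tau] = -np.inf
    return fused.argmax(axis=0)
```

For example, with three candidate classes, a class whose presence score falls below `tau` is removed from the per-pixel argmax even if its semantic logits are high, while an instance mask can reclaim pixels for its class over a weaker semantic response.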