🤖 AI Summary
To address the challenges of unseen-class recognition, large scale and orientation variations, scene complexity, and the high cost of pixel-level annotation in open-vocabulary segmentation (OVS) for remote sensing imagery, this paper proposes AerOSeg, an OVS framework designed specifically for remote sensing. The method integrates multi-rotational image–text correlation modeling, SAM-guided spatial refinement, semantic back-projection, and an attention-aware multi-scale decoder, augmented by domain-specific text prompts, multi-view feature alignment, and a semantic consistency loss. Evaluated on three major remote sensing benchmarks (iSAID, DLRSD, and OpenEarthMap), the framework outperforms state-of-the-art methods by an average of 2.54 points in harmonic mean IoU (h-mIoU), overcoming the limited generalizability and heavy annotation dependency of conventional supervised segmentation models.
📝 Abstract
Image segmentation beyond predefined categories is a key challenge in remote sensing, where novel, unseen classes often emerge during inference. Open-Vocabulary Segmentation (OVS) addresses the generalization limits of traditional supervised segmentation models while reducing their reliance on extensive per-pixel annotations, which are expensive and labor-intensive to obtain. However, most OVS methods are designed for natural images and struggle with remote sensing data due to scale variations, orientation changes, and complex scene compositions, motivating OVS approaches tailored specifically to remote sensing. In this context, we propose AerOSeg, a novel OVS approach for remote sensing data. First, we compute robust image-text correlation features using multiple rotated versions of the input image and domain-specific prompts. These features are then refined through spatial and class refinement blocks. Inspired by the success of the Segment Anything Model (SAM) across diverse domains, we leverage SAM features to guide the spatial refinement of the correlation features. We further introduce a semantic back-projection module and loss to ensure the seamless propagation of SAM's semantic information throughout the segmentation pipeline. Finally, a multi-scale attention-aware decoder enhances the refined correlation features to produce the final segmentation map. We validate our SAM-guided open-vocabulary remote sensing segmentation model on three benchmark remote sensing datasets: iSAID, DLRSD, and OpenEarthMap. Our model outperforms state-of-the-art open-vocabulary segmentation methods, achieving an average improvement of 2.54 points in h-mIoU.
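The multi-rotation image-text correlation step described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: `img_feats_fn` stands in for a vision-language image encoder (e.g. a CLIP-style backbone producing per-pixel features), the 90-degree rotation set and the simple averaging used to fuse the rotated correlation maps are assumptions for clarity.

```python
import numpy as np

def multi_rotation_correlation(img_feats_fn, image, text_emb):
    """Cosine-similarity image-text correlation maps, averaged over
    four 90-degree rotations of the input image.

    image:      (H, W, 3) input array (square, so rotations keep its shape)
    text_emb:   (C, D) one embedding per class prompt
    img_feats_fn: callable mapping an (H, W, 3) image to (H, W, D) features
    returns:    (H, W, C) rotation-averaged correlation volume
    """
    txt = text_emb / (np.linalg.norm(text_emb, axis=-1, keepdims=True) + 1e-8)
    corrs = []
    for k in range(4):
        # Rotate the image, extract per-pixel features for this view.
        rot_img = np.rot90(image, k, axes=(0, 1))
        feats = img_feats_fn(rot_img)
        feats = feats / (np.linalg.norm(feats, axis=-1, keepdims=True) + 1e-8)
        # Per-pixel cosine similarity with every class text embedding.
        corr = feats @ txt.T                      # (H, W, C)
        # Rotate the correlation map back to the original orientation.
        corrs.append(np.rot90(corr, -k, axes=(0, 1)))
    # Fuse the aligned views; the paper's fusion may differ.
    return np.mean(corrs, axis=0)
```

With a rotation-equivariant feature extractor the four aligned correlation maps coincide, so averaging leaves them unchanged; with a real encoder the averaging suppresses orientation-dependent noise, which is the motivation for using multiple rotated views.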