🤖 AI Summary
Referring Remote Sensing Image Segmentation (RRSIS) suffers from a severe scarcity of high-quality referring expression annotations, a problem exacerbated by dense small objects and complex backgrounds. To address this, the authors propose Weakly Referring Expression Learning (WREL), a new paradigm that uses abundant class names as weak supervision signals alongside a small set of accurate referring expressions, reducing reliance on fine-grained textual annotation; they also derive a provable upper bound on the performance gap relative to fully annotated training. Methodologically, the proposed LRB-WREL introduces a Learnable Reference Bank (LRB) that refines weakly referring expressions via sample-specific prompt embeddings to strengthen cross-modal alignment, and combines it with a teacher-student framework using dynamically scheduled EMA updates to stabilize training and improve generalization. Experiments on a newly constructed benchmark demonstrate that WREL achieves performance comparable to, or even surpassing, fully supervised baselines using only 10%-30% weakly supervised data, significantly lowering annotation costs.
📝 Abstract
Referring Remote Sensing Image Segmentation (RRSIS) aims to segment instances in remote sensing images according to referring expressions. Unlike Referring Image Segmentation on general images, acquiring high-quality referring expressions in the remote sensing domain is particularly challenging due to the prevalence of small, densely distributed objects and complex backgrounds. This paper introduces a new learning paradigm, Weakly Referring Expression Learning (WREL) for RRSIS, which leverages abundant class names as weakly referring expressions together with a small set of accurate ones to enable efficient training under limited annotation conditions. Furthermore, we provide a theoretical analysis showing that mixed-referring training yields a provable upper bound on the performance gap relative to training with fully annotated referring expressions, thereby establishing the validity of this new setting. We also propose LRB-WREL, which integrates a Learnable Reference Bank (LRB) to refine weakly referring expressions through sample-specific prompt embeddings that enrich coarse class-name inputs. Combined with a teacher-student optimization framework using dynamically scheduled EMA updates, LRB-WREL stabilizes training and enhances cross-modal generalization under noisy weakly referring supervision. Extensive experiments on our newly constructed benchmark with varying weakly referring data ratios validate both the theoretical insights and the practical effectiveness of WREL and LRB-WREL, demonstrating that they can approach or even surpass models trained with fully annotated referring expressions.
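The abstract mentions a teacher-student framework with dynamically scheduled EMA updates. The sketch below illustrates the general mechanism only: the teacher's weights track an exponential moving average of the student's, with a momentum that is ramped over training. The cosine schedule and the constants `base` and `final` are illustrative assumptions, not the paper's actual schedule, which the abstract does not specify.

```python
import math

def ema_update(teacher_params, student_params, momentum):
    """Blend student weights into the teacher: t <- m*t + (1-m)*s."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

def cosine_momentum(step, total_steps, base=0.996, final=1.0):
    """Dynamically scheduled momentum, ramping from `base` toward `final`
    along a cosine curve (an illustrative choice, common in EMA-based
    self-distillation; the paper's exact schedule is not given here)."""
    progress = (1.0 - math.cos(math.pi * step / total_steps)) / 2.0
    return base + (final - base) * progress
```

At the start of training the momentum equals `base`, so the teacher adapts relatively quickly; as training progresses it approaches `final`, freezing the teacher into a stable average of past students.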