🤖 AI Summary
Remote sensing image understanding faces two key challenges: (1) generic text prompts often fail to localize user-specific regions of interest precisely; and (2) high inter-class similarity and complex spatial relationships hinder accurate object recognition and description. To address these, we propose the first vision-prompt-driven multimodal remote sensing understanding framework, capable of jointly generating high-fidelity segmentation masks and semantically coherent textual descriptions. Our core contributions are a context-aware mask decoder; a cross-modal semantic-relation alignment module; and an integrated learning strategy that combines visual prompt guidance, cross-modal contrastive learning, relational graph modeling, and a dual consistency loss (semantic and relational). Evaluated on two established remote sensing benchmarks, our method achieves state-of-the-art performance, significantly improving both intent-aligned segmentation accuracy and descriptive fidelity.
📝 Abstract
Recent advances in image understanding have enabled methods that leverage large language models for multimodal reasoning in remote sensing. However, existing approaches still struggle to steer models toward user-relevant regions when only simple, generic text prompts are available. Moreover, in large-scale aerial imagery many objects exhibit highly similar visual appearances and carry rich inter-object relationships, which further complicates accurate recognition. To address these challenges, we propose Cross-modal Context-aware Learning for Visual Prompt-Guided Multimodal Image Understanding (CLV-Net). CLV-Net lets users supply a simple visual cue (a bounding box) to indicate a region of interest, and uses that cue to guide the model to generate correlated segmentation masks and captions that faithfully reflect user intent. Central to our design is a Context-Aware Mask Decoder that models and integrates inter-object relationships to strengthen target representations and improve mask quality. In addition, we introduce a Semantic and Relationship Alignment module: a Cross-modal Semantic Consistency Loss enhances fine-grained discrimination among visually similar targets, while a Relationship Consistency Loss enforces alignment between textual relations and visual interactions. Comprehensive experiments on two benchmark datasets show that CLV-Net outperforms existing methods and establishes new state-of-the-art results. The model effectively captures user intent and produces precise, intention-aligned multimodal outputs.
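To make the two consistency losses concrete, the sketch below shows one common way such objectives are instantiated: an InfoNCE-style contrastive term that pulls matched visual/text embeddings together (semantic consistency), and a graph-agreement term between visual and textual relation matrices (relationship consistency). All function names, shapes, and the temperature value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the two losses described in the abstract.
# Shapes, names, and hyperparameters are assumptions for illustration.

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def semantic_consistency_loss(visual_emb, text_emb, temperature=0.07):
    """InfoNCE-style cross-modal contrastive loss.

    Matched (visual, text) pairs sit on the diagonal of the N x N
    similarity matrix; each row is treated as a classification over
    the N text embeddings (vision-to-text direction only, for brevity).
    """
    v = l2_normalize(visual_emb)
    t = l2_normalize(text_emb)
    logits = v @ t.T / temperature                       # (N, N) similarities
    labels = np.arange(logits.shape[0])
    # log-softmax over each row, then pick the diagonal (matched pair)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[labels, labels].mean())

def relationship_consistency_loss(visual_rel, text_rel):
    """Penalize disagreement between the visual interaction graph and the
    textual relation graph, both given as N x N affinity matrices."""
    return float(np.mean((visual_rel - text_rel) ** 2))

# Tiny smoke test: identical embeddings should give a near-zero contrastive loss.
rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
loss = semantic_consistency_loss(v, v)
```

In practice such a contrastive term is typically applied symmetrically (vision-to-text and text-to-vision) and the relation matrices would come from the model's relational graph module; the one-directional numpy version above is only meant to show the structure of the objectives.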