Cross-modal Context-aware Learning for Visual Prompt Guided Multimodal Image Understanding in Remote Sensing

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Remote sensing image understanding faces two key challenges: (1) generic text prompts often fail to precisely localize user-specific regions of interest; and (2) high inter-class similarity and complex spatial relationships hinder accurate object recognition and description. To address these, we propose the first vision-prompt-driven multimodal remote sensing understanding framework, capable of jointly generating high-fidelity segmentation masks and semantically coherent textual descriptions. Our core contributions are: a context-aware mask decoder; a cross-modal semantic-relation alignment module; and an integrated learning strategy combining vision prompt guidance, cross-modal contrastive learning, relational graph modeling, and a dual consistency loss (semantic and relational). Evaluated on two established remote sensing benchmarks, our method achieves state-of-the-art performance, significantly improving both intent-aligned segmentation accuracy and descriptive fidelity.

📝 Abstract
Recent advances in image understanding have enabled methods that leverage large language models for multimodal reasoning in remote sensing. However, existing approaches still struggle to steer models to the user-relevant regions when only simple, generic text prompts are available. Moreover, in large-scale aerial imagery many objects exhibit highly similar visual appearances and carry rich inter-object relationships, which further complicates accurate recognition. To address these challenges, we propose Cross-modal Context-aware Learning for Visual Prompt-Guided Multimodal Image Understanding (CLV-Net). CLV-Net lets users supply a simple visual cue, a bounding box, to indicate a region of interest, and uses that cue to guide the model to generate correlated segmentation masks and captions that faithfully reflect user intent. Central to our design is a Context-Aware Mask Decoder that models and integrates inter-object relationships to strengthen target representations and improve mask quality. In addition, we introduce a Semantic and Relationship Alignment module: a Cross-modal Semantic Consistency Loss enhances fine-grained discrimination among visually similar targets, while a Relationship Consistency Loss enforces alignment between textual relations and visual interactions. Comprehensive experiments on two benchmark datasets show that CLV-Net outperforms existing methods and establishes new state-of-the-art results. The model effectively captures user intent and produces precise, intention-aligned multimodal outputs.
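The abstract names a Cross-modal Semantic Consistency Loss and a Relationship Consistency Loss but does not give their formulas. The sketch below is one plausible reading, not the paper's actual implementation: an InfoNCE-style contrastive term over matched region/caption embeddings for the semantic loss, and a mean-squared difference between pairwise cosine-similarity matrices for the relational term. All function names and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Unit-normalize feature vectors along the last axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def semantic_consistency_loss(vis, txt, tau=0.07):
    """InfoNCE-style alignment: matched region/caption pairs lie on the diagonal.
    vis, txt: (N, D) arrays of region and caption embeddings (assumed paired by row).
    """
    v, t = l2norm(vis), l2norm(txt)
    logits = v @ t.T / tau
    idx = np.arange(len(v))

    def xent(lg):
        # numerically stable cross-entropy with the diagonal as the target class
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # symmetrize: vision-to-text and text-to-vision directions
    return 0.5 * (xent(logits) + xent(logits.T))

def relationship_consistency_loss(vis, txt):
    """Align relational structure: pairwise cosine-similarity matrices of the
    visual and textual embeddings should match."""
    Sv = l2norm(vis) @ l2norm(vis).T
    St = l2norm(txt) @ l2norm(txt).T
    return float(np.mean((Sv - St) ** 2))

def dual_consistency_loss(vis, txt, lam=1.0):
    """Combined objective; lam is a hypothetical weighting hyperparameter."""
    return semantic_consistency_loss(vis, txt) + lam * relationship_consistency_loss(vis, txt)
```

The relational term is zero whenever both modalities induce the same pairwise similarity structure, which matches the stated goal of enforcing "alignment between textual relations and visual interactions" without requiring the embeddings themselves to be identical.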
Problem

Research questions and friction points this paper is trying to address.

Enhances multimodal image understanding with visual prompts
Improves segmentation and captioning accuracy in remote sensing
Models inter-object relationships for better target recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual prompt bounding box guides segmentation and captioning
Context-Aware Mask Decoder models inter-object relationships
Cross-modal consistency losses align semantics and relations
Xu Zhang, Jiabin Fang, Zhuoming Ding, Jin Yuan, Xuan Liu (College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China)
Qianjun Zhang (School of Computing and Artificial Intelligence, Southwest Jiaotong University, Sichuan 611756, China)
Zhiyong Li (Professor of Computer Science, Hunan University; computer vision, object detection)