GeoViS: Geospatially Rewarded Visual Search for Remote Sensing Visual Grounding

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Visual grounding in remote sensing imagery faces significant challenges due to extremely small object scales and complex geospatial semantics, such as relative positioning, hierarchical structure, and long-range contextual dependencies, in natural language queries.
Method: This paper proposes a tree-structured progressive search framework that formulates localization as an iterative optimization of geospatial hypotheses. It integrates multimodal large language models, explicit spatial relation modeling, hierarchical visual search strategies, and a reinforcement learning–driven reward mechanism to achieve cross-modal alignment and hierarchical spatial reasoning.
Contribution/Results: Evaluated on five remote sensing visual grounding benchmarks, the method achieves substantial improvements in localization accuracy and cross-domain generalization, while providing interpretable, stepwise reasoning paths. Its core innovation lies in the first integration of progressive hypothesis search with geospatial semantic reinforcement learning, effectively bridging fine-grained small-object detection and holistic scene understanding.

📝 Abstract
Recent advances in multimodal large language models (MLLMs) have led to remarkable progress in visual grounding, enabling fine-grained cross-modal alignment between textual queries and image regions. However, transferring such capabilities to remote sensing imagery remains challenging, as targets are often extremely small within kilometer-scale scenes, and queries typically involve intricate geospatial relations such as relative positions, spatial hierarchies, or contextual dependencies across distant objects. To address these challenges, we propose GeoViS, a Geospatially Rewarded Visual Search framework that reformulates remote sensing visual grounding as a progressive search-and-reasoning process. Rather than directly predicting the target location in a single step, GeoViS actively explores the global image through a tree-structured sequence of visual cues, integrating multimodal perception, spatial reasoning, and reward-guided exploration to refine geospatial hypotheses iteratively. This design enables the model to detect subtle small-scale targets while maintaining holistic scene awareness. Extensive experiments on five remote sensing grounding benchmarks demonstrate that GeoViS achieves precise geospatial understanding and consistently surpasses existing methods across key visual grounding metrics, highlighting its strong cross-domain generalization and interpretability.
Problem

Research questions and friction points this paper is trying to address.

Addresses small target detection in large-scale remote sensing scenes
Models complex geospatial relationships between distant objects
Enhances visual grounding via iterative search and reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive search-and-reasoning process for visual grounding
Tree-structured visual cue exploration with multimodal perception
Reward-guided iterative refinement of geospatial hypotheses
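The tree-structured, reward-guided refinement described above can be sketched as a toy beam search over quadtree crops of the image. Everything here is an illustrative assumption, not the paper's actual design: the node structure, the quadrant expansion, the `toy_reward` stand-in for a learned geospatial reward model, and the `depth`/`beam` hyperparameters are all hypothetical.

```python
# Hypothetical sketch of reward-guided tree search over image regions.
# Not the paper's algorithm: components and names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # (x0, y0, x1, y1), normalized to [0, 1]

@dataclass
class Node:
    box: Box
    reward: float

def split_quadrants(box: Box) -> List[Box]:
    """Expand a hypothesis region into four child regions (quadtree-style)."""
    x0, y0, x1, y1 = box
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    return [(x0, y0, mx, my), (mx, y0, x1, my),
            (x0, my, mx, y1), (mx, my, x1, y1)]

def tree_search(reward_fn: Callable[[Box], float],
                depth: int = 3, beam: int = 2) -> Box:
    """Iteratively refine hypotheses: score all children of the current
    frontier, keep the top-`beam` regions, and expand them further."""
    root = (0.0, 0.0, 1.0, 1.0)
    frontier = [Node(root, reward_fn(root))]
    for _ in range(depth):
        children = [Node(b, reward_fn(b))
                    for n in frontier for b in split_quadrants(n.box)]
        frontier = sorted(children, key=lambda n: n.reward, reverse=True)[:beam]
    return max(frontier, key=lambda n: n.reward).box

# Toy reward: proximity of the region centre to a known target point,
# standing in for a learned geospatial reward model.
target = (0.8, 0.3)
def toy_reward(box: Box) -> float:
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return -((cx - target[0]) ** 2 + (cy - target[1]) ** 2)

best = tree_search(toy_reward, depth=4, beam=2)
```

With each level of expansion the surviving regions shrink by half per axis, so the search localizes a small target without ever scoring the full image at fine resolution, which is the intuition behind coupling progressive search with a reward signal.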
🔎 Similar Papers
Peirong Zhang
Aerospace Information Research Institute, Chinese Academy of Sciences
Yidan Zhang
PhD Student, The Chinese University of Hong Kong, Shenzhen
computer vision, deep learning
Luxiao Xu
Aerospace Information Research Institute, Chinese Academy of Sciences
Jinliang Lin
Aerospace Information Research Institute, Chinese Academy of Sciences
Zonghao Guo
University of Chinese Academy of Sciences
Fengxiang Wang
National University of Defense Technology
Computer Vision, Remote Sensing
Xue Yang
Shanghai Jiao Tong University
Kaiwen Wei
Chongqing University
Lei Wang
Aerospace Information Research Institute, Chinese Academy of Sciences