🤖 AI Summary
Visual grounding in remote sensing imagery faces significant challenges: targets are extremely small relative to the scene, and natural language queries carry complex geospatial semantics such as relative positioning, hierarchical structure, and long-range contextual dependencies.
Method: This paper proposes a tree-structured progressive search framework that formulates localization as an iterative optimization of geospatial hypotheses. It integrates multimodal large language models, explicit spatial relation modeling, hierarchical visual search strategies, and a reinforcement learning–driven reward mechanism to achieve cross-modal alignment and hierarchical spatial reasoning.
Contribution/Results: Evaluated on five remote sensing visual grounding benchmarks, the method achieves substantial improvements in localization accuracy and cross-domain generalization, while providing interpretable, stepwise reasoning paths. Its core innovation lies in the first integration of progressive hypothesis search with geospatial semantic reinforcement learning—effectively bridging fine-grained small-object detection and holistic scene understanding.
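The summary does not include pseudocode, but the tree-structured progressive search it describes can be sketched as a best-first search over region hypotheses, each scored by a reward. Everything below is a hypothetical illustration under assumed names: `tree_search`, the quadrant `split` policy, and `toy_reward` are stand-ins, not GeoViS internals (which score hypotheses with an MLLM and a learned geospatial reward).

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    neg_reward: float                      # heapq is a min-heap, so store -reward
    box: tuple = field(compare=False)      # (x, y, w, h) region hypothesis
    depth: int = field(compare=False, default=0)

def split(box):
    """Expand a hypothesis into four quadrant children (one simple tree policy)."""
    x, y, w, h = box
    hw, hh = w / 2, h / 2
    return [(x, y, hw, hh), (x + hw, y, hw, hh),
            (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]

def tree_search(image_box, reward_fn, max_depth=4, budget=32):
    """Best-first search: repeatedly expand the most promising region hypothesis."""
    root = Node(-reward_fn(image_box), image_box, 0)
    frontier, best, expanded = [root], root, 0
    while frontier and expanded < budget:
        node = heapq.heappop(frontier)
        if node.neg_reward < best.neg_reward:
            best = node                    # keep the best hypothesis seen so far
        if node.depth < max_depth:
            expanded += 1
            for child in split(node.box):
                heapq.heappush(frontier, Node(-reward_fn(child), child, node.depth + 1))
    return best.box

# Toy stand-in for the learned reward: prefer regions centred near (700, 300),
# with a mild size penalty so the search keeps zooming in on the target.
def toy_reward(box):
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2
    return -(abs(cx - 700) + abs(cy - 300)) - 0.01 * (w + h)

print(tree_search((0, 0, 1024, 1024), toy_reward))  # → (640.0, 256.0, 128.0, 128.0)
```

Note the design choice this mirrors: instead of a single-step prediction, the search trades a small exploration budget for the ability to recover very small targets inside kilometer-scale scenes.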
📝 Abstract
Recent advances in multimodal large language models (MLLMs) have led to remarkable progress in visual grounding, enabling fine-grained cross-modal alignment between textual queries and image regions. However, transferring such capabilities to remote sensing imagery remains challenging, as targets are often extremely small within kilometer-scale scenes, and queries typically involve intricate geospatial relations such as relative positions, spatial hierarchies, or contextual dependencies across distant objects. To address these challenges, we propose GeoViS, a Geospatially Rewarded Visual Search framework that reformulates remote sensing visual grounding as a progressive search-and-reasoning process. Rather than directly predicting the target location in a single step, GeoViS actively explores the global image through a tree-structured sequence of visual cues, integrating multimodal perception, spatial reasoning, and reward-guided exploration to refine geospatial hypotheses iteratively. This design enables the model to detect subtle small-scale targets while maintaining holistic scene awareness. Extensive experiments on five remote sensing grounding benchmarks demonstrate that GeoViS achieves precise geospatial understanding and consistently surpasses existing methods across key visual grounding metrics, highlighting its strong cross-domain generalization and interpretability.
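As a purely illustrative example of the reward-guided refinement the abstract describes, a geospatial reward might combine box overlap with the ground truth and a check on the spatial relations mentioned in the query. The `iou` and `geospatial_reward` helpers and the 0.7/0.3 weighting below are assumptions for the sketch, not the paper's actual reward design.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def geospatial_reward(pred_box, gt_box, relations_satisfied, n_relations):
    """Hypothetical composite reward: localization overlap plus a bonus for
    each spatial relation in the query that the hypothesis satisfies."""
    overlap = iou(pred_box, gt_box)
    relation_score = relations_satisfied / n_relations if n_relations else 1.0
    return 0.7 * overlap + 0.3 * relation_score  # weights are illustrative
```

A reward of this shape would push the search toward hypotheses that are not just well-localized but also consistent with relational cues like "left of the runway" or "north of the harbor".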