AI Summary
This work addresses the susceptibility of existing remote sensing vision-language models (VLMs) to logical hallucinations in complex spatial reasoning tasks, often caused by reliance on positional shortcuts or broken reasoning chains that decouple the model's internal rationale from its final answer. To tackle this, the authors introduce GeoReason-Bench, a logic-driven benchmark, and propose a two-stage training framework. First, supervised fine-tuning injects geometric primitives and expert knowledge into the model; second, reinforcement learning aligns the internal reasoning trajectory with the final prediction through a logical consistency reward and an option permutation strategy. This approach achieves, for the first time in remote sensing VLMs, verifiable and interpretable joint optimization of reasoning and output, significantly enhancing both cognitive reliability and performance on GeoReason-Bench.
Abstract
The evolution of Remote Sensing Vision-Language Models (RS-VLMs) underscores the importance of transitioning from perception-centric recognition toward high-level deductive reasoning to improve cognitive reliability in complex spatial tasks. However, current models often suffer from logical hallucinations, where correct answers are derived from flawed reasoning chains or rely on positional shortcuts rather than spatial logic. This decoupling undermines reliability in strategic spatial decision-making. To address this, we present GeoReason, a framework designed to synchronize internal thinking with final decisions. We first construct GeoReason-Bench, a logic-driven dataset containing 4,000 reasoning trajectories synthesized from geometric primitives and expert knowledge. We then formulate a two-stage training strategy: (1) Supervised Knowledge Initialization to equip the model with reasoning syntax and domain expertise, and (2) Consistency-Aware Reinforcement Learning to refine deductive reliability. This second stage integrates a novel Logical Consistency Reward, which penalizes logical drift via an option permutation strategy to anchor decisions in verifiable reasoning traces. Experimental results demonstrate that our framework significantly enhances the cognitive reliability and interpretability of RS-VLMs, achieving state-of-the-art performance against other advanced methods.
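To make the option permutation idea concrete, here is a minimal sketch of how such a consistency reward could be computed for a multiple-choice question. The abstract does not specify the exact reward formula, so the function names (`consistency_reward`, `answer_fn`), the use of cyclic rotations as permutations, and the reward values are illustrative assumptions, not the paper's implementation.

```python
def consistency_reward(answer_fn, question, options, correct_text):
    """Toy logical-consistency reward (hypothetical sketch, not the paper's formula).

    Presents the same question under rotated option orders and checks that the
    model keeps selecting the same *content*. A model exploiting a positional
    shortcut (e.g. always answering "A") picks different contents under
    different orders and is penalized.
    """
    # Cyclic rotations give cheap, deterministic permutations of the options.
    perms = [options[i:] + options[:i] for i in range(len(options))]
    chosen = []
    for p in perms:
        idx = answer_fn(question, p)  # model returns the index of its chosen option
        chosen.append(p[idx])
    if len(set(chosen)) > 1:
        return -1.0  # answer drifts with option order: positional shortcut
    if chosen[0] == correct_text:
        return 1.0   # stable under permutation and correct
    return 0.0       # stable but wrong
```

Under this sketch, a policy can only earn the full reward by committing to option *content* that survives reordering, which is one way to anchor the final decision in the reasoning rather than in option position.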