GeoReason: Aligning Thinking And Answering In Remote Sensing Vision-Language Models Via Logical Consistency Reinforcement Learning

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
This work addresses the susceptibility of existing remote sensing vision-language models (VLMs) to logical hallucinations in complex spatial reasoning tasks, often caused by reliance on positional shortcuts or broken reasoning chains that decouple the model’s internal rationale from its final answer. To tackle this, the authors introduce GeoReason-Bench, a logic-driven benchmark, and propose a two-stage training framework. First, supervised fine-tuning injects geometric primitives and expert knowledge into the model; second, reinforcement learning aligns the internal reasoning trajectory with the final prediction through a logic consistency reward mechanism and an option permutation strategy. This approach achieves, for the first time in remote sensing VLMs, verifiable and interpretable joint optimization of reasoning and output, significantly enhancing both cognitive reliability and performance on GeoReason-Bench.

πŸ“ Abstract
The evolution of Remote Sensing Vision-Language Models (RS-VLMs) emphasizes the importance of transitioning from perception-centric recognition toward high-level deductive reasoning to enhance cognitive reliability in complex spatial tasks. However, current models often suffer from logical hallucinations, where correct answers are derived from flawed reasoning chains or rely on positional shortcuts rather than spatial logic. This decoupling undermines reliability in strategic spatial decision-making. To address this, we present GeoReason, a framework designed to synchronize internal thinking with final decisions. We first construct GeoReason-Bench, a logic-driven dataset containing 4,000 reasoning trajectories synthesized from geometric primitives and expert knowledge. We then formulate a two-stage training strategy: (1) Supervised Knowledge Initialization to equip the model with reasoning syntax and domain expertise, and (2) Consistency-Aware Reinforcement Learning to refine deductive reliability. This second stage integrates a novel Logical Consistency Reward, which penalizes logical drift via an option permutation strategy to anchor decisions in verifiable reasoning traces. Experimental results demonstrate that our framework significantly enhances the cognitive reliability and interpretability of RS-VLMs, achieving state-of-the-art performance compared to other advanced methods.
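The option permutation strategy described in the abstract can be read as a consistency check: re-ask the same multiple-choice question with the options shuffled, and reward the model only when it selects the same underlying option (not merely the same position letter) every time. A minimal Python sketch under that reading follows; the function names, reward values, and the `model_answer_fn` interface are illustrative assumptions, not the paper's actual implementation:

```python
import random


def permute_options(options):
    """Return a shuffled copy of the options and the index mapping,
    so that shuffled[k] == options[mapping[k]]."""
    mapping = list(range(len(options)))
    random.shuffle(mapping)
    return [options[i] for i in mapping], mapping


def logical_consistency_reward(model_answer_fn, question, options,
                               correct_option, n_permutations=3):
    """Illustrative consistency reward: the model earns full reward only
    if it picks the same underlying option, and the correct one, across
    every permutation of the option order. A model exploiting a
    positional shortcut (e.g. always answering "A") picks different
    underlying options under shuffling and is penalized."""
    chosen = set()
    for _ in range(n_permutations):
        shuffled, mapping = permute_options(options)
        # model_answer_fn returns an index into the shuffled option list
        pick = model_answer_fn(question, shuffled)
        chosen.add(mapping[pick])  # map back to the original option id
    consistent = len(chosen) == 1
    if consistent and chosen == {correct_option}:
        return 1.0   # stable and correct
    if consistent:
        return 0.0   # stable but wrong
    return -1.0      # inconsistent: positional shortcut / logical drift
```

A content-grounded model (one that locates the answer by meaning, e.g. `lambda q, opts: opts.index("river")`) scores 1.0 regardless of option order, while a position-biased model that always returns index 0 is driven toward the penalty, which is the drift this reward is meant to suppress.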
Problem

Research questions and friction points this paper is trying to address.

logical hallucination
spatial reasoning
remote sensing vision-language models
reasoning consistency
cognitive reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Logical Consistency Reinforcement Learning
GeoReason
Reasoning Trajectory Alignment
Remote Sensing Vision-Language Models
Cognitive Reliability
Wenshuai Li
Aerospace Information Research Institute, Chinese Academy of Sciences
Xiantai Xiang
Aerospace Information Research Institute, Chinese Academy of Sciences
Zixiao Wen
Aerospace Information Research Institute, Chinese Academy of Sciences
Guangyao Zhou
Senior Research Scientist, Google DeepMind
Ben Niu
Xidian University, PSU, IIE CAS
Wireless Network Security - Applied Cryptography - Privacy Computing
Feng Wang
Aerospace Information Research Institute, Chinese Academy of Sciences
Lijia Huang
Aerospace Information Research Institute, Chinese Academy of Sciences
Qiantong Wang
Aerospace Information Research Institute, Chinese Academy of Sciences
Yuxin Hu
Stanford University
Medical imaging - MRI - Machine learning