🤖 AI Summary
This work addresses the prevalent issue of anatomically implausible hallucinations in multimodal large language models (MLLMs) due to their limited geometric understanding in medical perception. To mitigate this, the authors propose Med-Scout, a geometry-aware reinforcement learning post-training framework that operates without expert annotations. Med-Scout leverages three proxy tasks—hierarchical scale localization, topological jigsaw reconstruction, and anomaly consistency detection—to automatically generate geometric supervision signals from unlabeled medical images. Additionally, the study introduces Med-Scout-Bench, the first benchmark dedicated to evaluating geometric reasoning in medical MLLMs. Experimental results demonstrate that Med-Scout outperforms both leading open-source and proprietary MLLMs by over 40% on this benchmark, while also significantly enhancing geometric fidelity and clinical accuracy in radiology and general medical visual question answering tasks.
📝 Abstract
Despite recent Multimodal Large Language Models (MLLMs)' linguistic prowess in medical diagnosis, we find even state-of-the-art MLLMs suffer from a critical perceptual deficit: geometric blindness. This failure to ground outputs in objective geometric constraints leads to plausible yet factually incorrect hallucinations, rooted in training paradigms that prioritize linguistic fluency over geometric fidelity. This paper introduces Med-Scout, a novel framework that "cures" this blindness via Reinforcement Learning (RL) that leverages the intrinsic geometric logic latent within unlabeled medical images. Instead of relying on costly expert annotations, Med-Scout derives verifiable supervision signals through three strategic proxy tasks: Hierarchical Scale Localization, Topological Jigsaw Reconstruction, and Anomaly Consistency Detection. To rigorously quantify this deficit, we present Med-Scout-Bench, a new benchmark specifically designed to evaluate geometric perception. Extensive evaluations show that Med-Scout significantly mitigates geometric blindness, outperforming leading proprietary and open-source MLLMs by over 40% on our benchmark. Furthermore, this enhanced geometric perception generalizes to broader medical understanding, achieving superior results on radiological and comprehensive medical VQA tasks.
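The key idea behind the proxy tasks is that the reward is verifiable by construction: because the training pipeline itself scrambles or transforms the image, the ground truth is known without any expert annotation. The abstract does not specify the exact reward formulation, so the following is a minimal sketch of this general idea using the jigsaw task as an example; the function names (`make_jigsaw`, `jigsaw_reward`) and the per-tile accuracy reward are illustrative assumptions, not the paper's actual implementation.

```python
import random

def make_jigsaw(n_tiles, seed=None):
    """Scramble the tile indices of an image cut into n_tiles patches.

    Hypothetical helper: the ground-truth permutation is known by
    construction, since we shuffled the tiles ourselves -- no expert
    annotation is required to score a model's reconstruction.
    """
    rng = random.Random(seed)
    true_order = list(range(n_tiles))
    scrambled = true_order[:]
    rng.shuffle(scrambled)
    return scrambled, true_order

def jigsaw_reward(predicted_order, true_order):
    """Verifiable RL reward: fraction of tiles placed in the true position.

    Illustrative choice of reward -- the paper's actual formulation
    (e.g., permutation-level vs. tile-level credit) is not given here.
    """
    assert len(predicted_order) == len(true_order)
    correct = sum(p == t for p, t in zip(predicted_order, true_order))
    return correct / len(true_order)
```

A policy that reorders the scrambled tiles can then be rewarded with `jigsaw_reward(model_output, true_order)`; the other two proxy tasks admit analogous self-verifying rewards (e.g., whether a predicted scale or anomaly flag matches the synthetically induced ground truth).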