🤖 AI Summary
Deep learning in pathological diagnosis suffers from limited interpretability and low clinical trust. To address this, we propose a cell-level multimodal reasoning framework that unifies pixel-level lesion segmentation, expert-level diagnostic report generation, and cross-modal (vision–text) alignment—enabling fine-grained cellular analysis and traceable, evidence-based reasoning. Our method integrates multi-scale visual representations, vision–language joint embedding, and controllable text generation to ensure diagnostic rationales are visually grounded, empirically verifiable, and interactive. Evaluated on the PathGen and GADVR benchmarks, the framework achieves significant improvements: +4.2% mIoU in segmentation accuracy, +18.7% BLEU-4 in clinical relevance of generated reports, and +15.3% CLIPScore in vision–text alignment. This work establishes a novel paradigm for interpretable, clinically actionable AI-assisted pathology diagnosis.
📝 Abstract
Deep learning-based automated pathological diagnosis has markedly improved diagnostic efficiency and reduced inter-observer variability, yet its clinical adoption remains limited by opaque model decisions and a lack of traceable rationale. To address this, recent multimodal visual reasoning architectures provide a unified framework that generates pixel-level segmentation masks alongside semantically aligned textual explanations. By localizing lesion regions and producing expert-style diagnostic narratives, these models deliver the transparent, interpretable insights necessary for dependable AI-assisted pathology. Building on these advances, we propose PathMR, a cell-level Multimodal visual Reasoning framework for Pathological image analysis. Given a pathological image and a textual query, PathMR generates expert-level diagnostic explanations while simultaneously predicting cell distribution patterns. To benchmark its performance, we evaluated our approach on the publicly available PathGen dataset as well as on our newly developed GADVR dataset. Extensive experiments on these two datasets demonstrate that PathMR consistently outperforms state-of-the-art visual reasoning methods in text generation quality, segmentation accuracy, and cross-modal alignment. These results highlight the potential of PathMR for improving interpretability in AI-driven pathological diagnosis. The code will be publicly available at https://github.com/zhangye-zoe/PathMR.
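The segmentation gains above are reported in mean intersection-over-union (mIoU). As a point of reference, here is a minimal, generic sketch of that metric for flattened class-label masks; it is an illustration of the standard definition, not the authors' evaluation code.

```python
# Generic mIoU sketch: per-class IoU averaged over classes that appear
# in either mask. This mirrors the standard definition, not PathMR's code.

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union:  # skip classes absent from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Example on two flattened binary masks:
# class 0 -> IoU 1/2, class 1 -> IoU 2/3, so mIoU = 7/12
print(mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2))
```

In practice the masks would be per-pixel cell-type labels flattened from 2D arrays; the skip-absent-class convention varies between benchmarks, so check the dataset's official scorer before comparing numbers.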