🤖 AI Summary
Current computational pathology methods suffer from two key limitations: nucleus segmentation relies on predefined categories, and pathological visual question answering lacks region-localization capability. To address these, this work introduces PathVG—the first pathological visual grounding benchmark—alongside the RefPath dataset (27,610 images, 33,500 referring expressions with bounding-box annotations). Unlike visual grounding in other domains, PathVG involves multi-scale pathological images and expressions that encode implicit pathological knowledge. As a baseline, the authors propose the Pathology Knowledge-enhanced Network (PKNet), which uses a large language model (LLM) to convert pathological terms carrying implicit information into explicit knowledge features, and fuses these with expression features through a designed Knowledge Fusion Module (KFM). Evaluated on PathVG, the method achieves state-of-the-art performance, significantly improving localization accuracy for expressions with implicit medical knowledge.
📝 Abstract
With the rapid development of computational pathology, many AI-assisted diagnostic tasks have emerged. Nucleus segmentation can delineate various types of cells for downstream analysis, but it relies on predefined categories and lacks flexibility. Pathology visual question answering, meanwhile, can perform image-level understanding but lacks region-level detection capability. To address these limitations, we propose a new benchmark called Pathology Visual Grounding (PathVG), which aims to detect regions based on expressions with different attributes. To evaluate PathVG, we create a new dataset named RefPath, which contains 27,610 images with 33,500 language-grounded boxes. Compared to visual grounding in other domains, PathVG presents pathological images at multiple scales and contains expressions requiring pathological knowledge. In our experimental study, we found that the biggest challenge is the implicit information underlying pathological expressions. Based on this, we propose the Pathology Knowledge-enhanced Network (PKNet) as the baseline model for PathVG. PKNet leverages the knowledge-enhancement capabilities of Large Language Models (LLMs) to convert pathological terms carrying implicit information into explicit visual features, and fuses these knowledge features with expression features through the designed Knowledge Fusion Module (KFM). The proposed method achieves state-of-the-art performance on the PathVG benchmark.
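The abstract does not specify the internals of the Knowledge Fusion Module. A common way to fuse two token sequences of this kind is cross-attention, where expression features attend to LLM-derived knowledge features. The sketch below illustrates that general pattern in NumPy; all function names, projection matrices, and tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def knowledge_fusion(expr_feats, know_feats, Wq, Wk, Wv):
    """Hypothetical single-head cross-attention fusion: expression
    tokens (queries) attend to knowledge tokens (keys/values) produced
    by an LLM, with a residual connection back to the expression."""
    Q = expr_feats @ Wq                  # (L_expr, d)
    K = know_feats @ Wk                  # (L_know, d)
    V = know_feats @ Wv                  # (L_know, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (L_expr, L_know)
    return expr_feats + attn @ V         # fused features, (L_expr, d)

# Toy usage with random features (shapes are assumptions).
rng = np.random.default_rng(0)
d = 8
expr = rng.standard_normal((5, d))       # 5 expression tokens
know = rng.standard_normal((7, d))       # 7 knowledge tokens from an LLM
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
fused = knowledge_fusion(expr, know, Wq, Wk, Wv)
print(fused.shape)  # (5, 8): one fused vector per expression token
```

The residual connection keeps the original expression semantics intact while injecting knowledge context, which matches the stated goal of making implicit pathological information explicit rather than replacing the expression features outright.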